ResearchTrend.AI

arXiv:2405.15731 · Cited By

Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks
24 May 2024
Jerome Sieber, Carmen Amo Alonso, A. Didier, M. Zeilinger, Antonio Orvieto
Topics: AAML

Papers citing "Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks"

11 / 11 papers shown

1. Block-Biased Mamba for Long-Range Sequence Processing
   Annan Yu, N. Benjamin Erichson · Mamba · 13 May 2025
2. NIMBA: Towards Robust and Principled Processing of Point Clouds With SSMs
   Nursena Köprücü, Destiny Okpekpe, Antonio Orvieto · Mamba · 31 Oct 2024
3. Making Every Frame Matter: Continuous Activity Recognition in Streaming Video via Adaptive Video Context Modeling
   Hao Wu, Donglin Bai, Shiqi Jiang, Qianxi Zhang, Y. Yang, Ting Cao, Fengyuan Xu, Yunxin Liu · 19 Oct 2024
4. Lambda-Skip Connections: the architectural component that prevents Rank Collapse
   Federico Arangath Joseph, Jerome Sieber, M. Zeilinger, Carmen Amo Alonso · 14 Oct 2024
5. Can Mamba Always Enjoy the "Free Lunch"?
   Ruifeng Ren, Zhicong Li, Yong Liu · 04 Oct 2024
6. What comes after transformers? -- A selective survey connecting ideas in deep learning
   Johannes Schneider · AI4CE · 01 Aug 2024
7. HGRN2: Gated Linear RNNs with State Expansion
   Zhen Qin, Songlin Yang, Weixuan Sun, Xuyang Shen, Dong Li, Weigao Sun, Yiran Zhong · LRM · 11 Apr 2024
8. State Space Models as Foundation Models: A Control Theoretic Overview
   Carmen Amo Alonso, Jerome Sieber, M. Zeilinger · AI4CE, Mamba · 25 Mar 2024
9. Zoology: Measuring and Improving Recall in Efficient Language Models
   Simran Arora, Sabri Eyuboglu, Aman Timalsina, Isys Johnson, Michael Poli, James Zou, Atri Rudra, Christopher Ré · 08 Dec 2023
10. Resurrecting Recurrent Neural Networks for Long Sequences
    Antonio Orvieto, Samuel L. Smith, Albert Gu, Anushan Fernando, Çağlar Gülçehre, Razvan Pascanu, Soham De · 11 Mar 2023
11. Structured State Space Models for In-Context Reinforcement Learning
    Chris Xiaoxuan Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob N. Foerster, Satinder Singh, Feryal M. P. Behbahani · AI4TS · 07 Mar 2023