Recurrent neural networks: vanishing and exploding gradients are not the end of the story
Nicolas Zucchet, Antonio Orvieto
arXiv:2405.21064 · 31 May 2024
Topics: ODL, AAML
Papers citing "Recurrent neural networks: vanishing and exploding gradients are not the end of the story" (7 of 7 papers shown)
Weight-Space Linear Recurrent Neural Networks
Roussel Desmond Nzoyem, Nawid Keshtmand, Idriss Tsayem, David A.W. Barton, Tom Deakin
01 Jun 2025

In Search of Adam's Secret Sauce
Antonio Orvieto, Robert Gower
27 May 2025

Revisiting Glorot Initialization for Long-Range Linear Recurrences
Noga Bar, Mariia Seleznova, Yotam Alexander, Gitta Kutyniok, Raja Giryes
26 May 2025

Scalable Graph Generative Modeling via Substructure Sequences
Zehong Wang, Zheyuan Zhang, Tianyi Ma, Chuxu Zhang, Yanfang Ye
Topics: AI4CE
22 May 2025

Dynamically Learning to Integrate in Recurrent Neural Networks
Blake Bordelon, Jordan Cotler, Cengiz Pehlevan, Jacob A. Zavatone-Veth
24 Mar 2025

Fixed-Point RNNs: From Diagonal to Dense in a Few Iterations
Sajad Movahedi, Felix Sarnthein, Nicola Muca Cirone, Antonio Orvieto
13 Mar 2025

Theoretical Foundations of Deep Selective State-Space Models
Nicola Muca Cirone, Antonio Orvieto, Benjamin Walker, C. Salvi, Terry Lyons
Topics: Mamba
29 Feb 2024