
The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks

arXiv:2210.02157, 5 October 2022
Blake Bordelon, Cengiz Pehlevan

Papers citing "The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks"

11 papers

  1. Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation
     Satoki Ishikawa, Rio Yokota, Ryo Karakida (04 Nov 2024)
  2. Optimal Protocols for Continual Learning via Statistical Physics and Control Theory
     Francesco Mori, Stefano Sarao Mannelli, Francesca Mignacco (26 Sep 2024)
  3. The high-dimensional asymptotics of first order methods with random data
     Michael Celentano, Chen Cheng, Andrea Montanari (14 Dec 2021)
  4. Neural Networks as Kernel Learners: The Silent Alignment Effect
     Alexander B. Atanasov, Blake Bordelon, Cengiz Pehlevan (29 Oct 2021)
  5. How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
     Akhilan Boopathy, Ila Fiete (15 Jun 2021)
  6. Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
     Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala (23 Jun 2020)
  7. Gaussian Gated Linear Networks
     David Budden, Adam H. Marblestone, Eren Sezener, Tor Lattimore, Greg Wayne, J. Veness (10 Jun 2020)
  8. Gated Linear Networks
     William H. Guss, Tor Lattimore, David Budden, Avishkar Bhoopchand, Christopher Mattern, ..., Ruslan Salakhutdinov, Jianan Wang, Peter Toth, Simon Schmitt, Marcus Hutter (30 Sep 2019)
  9. Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias
     Stéphane d'Ascoli, Levent Sagun, Joan Bruna, Giulio Biroli (16 Jun 2019)
  10. Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures
      Sergey Bartunov, Adam Santoro, Blake A. Richards, Luke Marris, Geoffrey E. Hinton, Timothy Lillicrap (12 Jul 2018)
  11. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
      Andrew M. Saxe, James L. McClelland, Surya Ganguli (20 Dec 2013)