Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks (arXiv:2205.09653)
Blake Bordelon, C. Pehlevan
19 May 2022 [MLT]

Papers citing "Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks"

50 / 61 papers shown
  • Feature Learning Beyond the Edge of Stability. Dávid Terjék. 20 May 2025. [MLT]
  • Don't be lazy: CompleteP enables compute-efficient deep transformers. Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Bill Li, Blake Bordelon, Shane Bergsma, C. Pehlevan, Boris Hanin, Joel Hestness. 02 May 2025.
  • Neuronal correlations shape the scaling behavior of memory capacity and nonlinear computational capability of recurrent neural networks. Shotaro Takasu, Toshio Aoyagi. 28 Apr 2025.
  • Generalization through variance: how noise shapes inductive biases in diffusion models. John J. Vastola. 16 Apr 2025. [DiffM]
  • Deep Neural Nets as Hamiltonians. Mike Winer, Boris Hanin. 31 Mar 2025.
  • Dynamically Learning to Integrate in Recurrent Neural Networks. Blake Bordelon, Jordan Cotler, C. Pehlevan, Jacob A. Zavatone-Veth. 24 Mar 2025.
  • Learning richness modulates equality reasoning in neural networks. William L. Tong, C. Pehlevan. 12 Mar 2025.
  • Global Convergence and Rich Feature Learning in L-Layer Infinite-Width Neural Networks under μP Parametrization. Zixiang Chen, Greg Yang, Qingyue Zhao, Q. Gu. 12 Mar 2025. [MLT]
  • A Theory of Initialisation's Impact on Specialisation. Devon Jarvis, Sebastian Lee, Clémentine Dominé, Andrew M. Saxe, Stefano Sarao Mannelli. 04 Mar 2025. [CLL]
  • Function-Space Learning Rates. Edward Milsom, Ben Anson, Laurence Aitchison. 24 Feb 2025.
  • Universal Sharpness Dynamics in Neural Network Training: Fixed Point Analysis, Edge of Stability, and Route to Chaos. Dayal Singh Kalra, Tianyu He, M. Barkeshli. 17 Feb 2025.
  • The Complexity of Learning Sparse Superposed Features with Feedback. Akash Kumar. 08 Feb 2025.
  • Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer. Blake Bordelon, C. Pehlevan. 04 Feb 2025. [AI4CE]
  • Do Mice Grok? Glimpses of Hidden Progress During Overtraining in Sensory Cortex. Tanishq Kumar, Blake Bordelon, C. Pehlevan, Venkatesh N. Murthy, Samuel Gershman. 05 Nov 2024. [OOD, CLL, SSL]
  • Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation. Satoki Ishikawa, Rio Yokota, Ryo Karakida. 04 Nov 2024.
  • How Does Critical Batch Size Scale in Pre-training? Hanlin Zhang, Depen Morwani, Nikhil Vyas, Jingfeng Wu, Difan Zou, Udaya Ghai, Dean Phillips Foster, Sham Kakade. 29 Oct 2024.
  • Estimating the Spectral Moments of the Kernel Integral Operator from Finite Sample Matrices. Chanwoo Chun, SueYeon Chung, Daniel D. Lee. 23 Oct 2024.
  • The Optimization Landscape of SGD Across the Feature Learning Strength. Alexander B. Atanasov, Alexandru Meterez, James B. Simon, C. Pehlevan. 06 Oct 2024.
  • Optimal Protocols for Continual Learning via Statistical Physics and Control Theory. Francesco Mori, Stefano Sarao Mannelli, Francesca Mignacco. 26 Sep 2024.
  • How Feature Learning Can Improve Neural Scaling Laws. Blake Bordelon, Alexander B. Atanasov, C. Pehlevan. 26 Sep 2024.
  • TASI Lectures on Physics for Machine Learning. Jim Halverson. 31 Jul 2024.
  • A spring-block theory of feature learning in deep neural networks. Chengzhi Shi, Liming Pan, Ivan Dokmanić. 28 Jul 2024. [AI4CE]
  • Coding schemes in neural networks learning classification tasks. Alexander van Meegen, H. Sompolinsky. 24 Jun 2024.
  • Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning. D. Kunin, Allan Raventós, Clémentine Dominé, Feng Chen, David Klindt, Andrew M. Saxe, Surya Ganguli. 10 Jun 2024. [MLT]
  • Understanding and Minimising Outlier Features in Neural Network Training. Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, Thomas Hofmann. 29 May 2024.
  • Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes. Zhenfeng Tu, Santiago Aranguri, Arthur Jacot. 27 May 2024.
  • Bayesian RG Flow in Neural Network Field Theories. Jessica N. Howard, Marc S. Klinger, Anindita Maiti, A. G. Stapleton. 27 May 2024.
  • Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers. Lorenzo Tiberi, Francesca Mignacco, Kazuki Irie, H. Sompolinsky. 24 May 2024.
  • Infinite Limits of Multi-head Transformer Dynamics. Blake Bordelon, Hamza Tahir Chaudhry, C. Pehlevan. 24 May 2024. [AI4CE]
  • Flexible infinite-width graph convolutional networks and the importance of representation learning. Ben Anson, Edward Milsom, Laurence Aitchison. 09 Feb 2024. [SSL, GNN]
  • Towards Understanding Inductive Bias in Transformers: A View From Infinity. Itay Lavie, Guy Gur-Ari, Z. Ringel. 07 Feb 2024.
  • A Dynamical Model of Neural Scaling Laws. Blake Bordelon, Alexander B. Atanasov, C. Pehlevan. 02 Feb 2024.
  • On the Parameterization of Second-Order Optimization Effective Towards the Infinite Width. Satoki Ishikawa, Ryo Karakida. 19 Dec 2023.
  • Meta-Learning Strategies through Value Maximization in Neural Networks. Rodrigo Carrasco-Davis, Javier Masís, Andrew M. Saxe. 30 Oct 2023.
  • A Spectral Condition for Feature Learning. Greg Yang, James B. Simon, Jeremy Bernstein. 26 Oct 2023.
  • Grokking as a First Order Phase Transition in Two Layer Networks. Noa Rubin, Inbar Seroussi, Z. Ringel. 05 Oct 2023.
  • Commutative Width and Depth Scaling in Deep Neural Networks. Soufiane Hayou. 02 Oct 2023.
  • Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit. Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, C. Pehlevan. 28 Sep 2023.
  • Connecting NTK and NNGP: A Unified Theoretical Framework for Wide Neural Network Learning Dynamics. Yehonatan Avidan, Qianyi Li, H. Sompolinsky. 08 Sep 2023.
  • Quantitative CLTs in Deep Neural Networks. Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati. 12 Jul 2023. [BDL]
  • Loss Dynamics of Temporal Difference Reinforcement Learning. Blake Bordelon, P. Masset, Henry Kuo, C. Pehlevan. 10 Jul 2023. [AI4CE]
  • Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space. Zhengdao Chen. 03 Jul 2023.
  • Synaptic Weight Distributions Depend on the Geometry of Plasticity. Roman Pogodin, Jonathan H. Cornford, Arna Ghosh, Gauthier Gidel, Guillaume Lajoie, Blake A. Richards. 30 May 2023.
  • Feature-Learning Networks Are Consistent Across Widths At Realistic Scales. Nikhil Vyas, Alexander B. Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, C. Pehlevan. 28 May 2023.
  • Introduction to dynamical mean-field theory of randomly connected neural networks with bidirectionally correlated couplings. Wenxuan Zou, Haiping Huang. 15 May 2023. [AI4CE]
  • Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks. Blake Bordelon, C. Pehlevan. 06 Apr 2023. [MLT]
  • On the Stepwise Nature of Self-Supervised Learning. James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht. 27 Mar 2023. [SSL]
  • Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width. Dayal Singh Kalra, M. Barkeshli. 23 Feb 2023.
  • Neural networks learn to magnify areas near decision boundaries. Jacob A. Zavatone-Veth, Sheng Yang, Julian Rubinfien, C. Pehlevan. 26 Jan 2023. [MLT, AI4CE]
  • Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features. Adityanarayanan Radhakrishnan, Daniel Beaglehole, Parthe Pandit, M. Belkin. 28 Dec 2022. [FAtt, MLT]