Asymptotics of Wide Networks from Feynman Diagrams
Ethan Dyer, Guy Gur-Ari
arXiv:1909.11304, 25 September 2019
Papers citing "Asymptotics of Wide Networks from Feynman Diagrams" (36 papers)
Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
Blake Bordelon, Cengiz Pehlevan [AI4CE] (04 Feb 2025)

Equivariant Neural Tangent Kernels
Philipp Misof, Pan Kessel, Jan E. Gerken (10 Jun 2024)

Bayesian RG Flow in Neural Network Field Theories
Jessica N. Howard, Marc S. Klinger, Anindita Maiti, A. G. Stapleton (27 May 2024)

Random ReLU Neural Networks as Non-Gaussian Processes
Rahul Parhi, Pakshal Bohra, Ayoub El Biari, Mehrsa Pourya, Michael Unser (16 May 2024)

A theory of data variability in Neural Network Bayesian inference
Javed Lindner, David Dahmen, Michael Krämer, M. Helias [BDL] (31 Jul 2023)

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
Blake Bordelon, Cengiz Pehlevan [MLT] (06 Apr 2023)

Effective Theory of Transformers at Initialization
Emily Dinan, Sho Yaida, Susan Zhang (04 Apr 2023)

Catapult Dynamics and Phase Transitions in Quadratic Nets
David Meltzer, Junyu Liu (18 Jan 2023)

A Solvable Model of Neural Scaling Laws
A. Maloney, Daniel A. Roberts, J. Sully (30 Oct 2022)

Meta-Principled Family of Hyperparameter Scaling Strategies
Sho Yaida (10 Oct 2022)

On the universal consistency of an over-parametrized deep neural network estimate learned by gradient descent
Selina Drews, Michael Kohler (30 Aug 2022)

Neural Networks can Learn Representations with Gradient Descent
Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi [SSL, MLT] (30 Jun 2022)

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, Cengiz Pehlevan [MLT] (19 May 2022)

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang [MLT] (03 May 2022)

On Feature Learning in Neural Networks with Global Convergence Guarantees
Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna [MLT] (22 Apr 2022)

Contrasting random and learned features in deep Bayesian linear regression
Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan [BDL, MLT] (01 Mar 2022)

Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs
Inbar Seroussi, Gadi Naveh, Zohar Ringel (31 Dec 2021)

Rate of Convergence of Polynomial Networks to Gaussian Processes
Adam Klukowski (04 Nov 2021)

Neural Networks as Kernel Learners: The Silent Alignment Effect
Alexander B. Atanasov, Blake Bordelon, Cengiz Pehlevan [MLT] (29 Oct 2021)

Nonperturbative renormalization for the neural network-QFT correspondence
Harold Erbin, Vincent Lahoche, D. O. Samary (03 Aug 2021)

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee [DD] (27 Jul 2021)

A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs
Gadi Naveh, Zohar Ringel [SSL, MLT] (08 Jun 2021)

Explaining Neural Scaling Laws
Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, Utkarsh Sharma (12 Feb 2021)

Deep Networks and the Multiple Manifold Problem
Sam Buchanan, D. Gilboa, John N. Wright (25 Aug 2020)

Whitening and second order optimization both make information in the dataset unusable during training, and can reduce or prevent generalization
Neha S. Wadia, Daniel Duckworth, S. Schoenholz, Ethan Dyer, Jascha Narain Sohl-Dickstein (17 Aug 2020)

Tensor Programs II: Neural Tangent Kernel for Any Architecture
Greg Yang (25 Jun 2020)

On the training dynamics of deep networks with $L_2$ regularization
Aitor Lewkowycz, Guy Gur-Ari (15 Jun 2020)

Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Z. Fan, Zhichao Wang (25 May 2020)

Predicting the outputs of finite deep neural networks trained with noisy gradients
Gadi Naveh, Oded Ben-David, H. Sompolinsky, Zohar Ringel (02 Apr 2020)

The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari [ODL] (04 Mar 2020)

On the infinite width limit of neural networks with a standard parameterization
Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee (21 Jan 2020)

Neural Spectrum Alignment: Empirical Study
Dmitry Kopitkov, Vadim Indelman (19 Oct 2019)

Non-Gaussian processes and neural networks at finite widths
Sho Yaida (30 Sep 2019)

Finite Depth and Width Corrections to the Neural Tangent Kernel
Boris Hanin, Mihai Nica [MDE] (13 Sep 2019)

Scaling description of generalization with number of parameters in deep learning
Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart (06 Jan 2019)

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington (14 Jun 2018)