Cited By
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong
arXiv:2007.12826 · 25 July 2020
Papers citing "The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training" (24 of 74 papers shown):

A Framework for Overparameterized Learning
Dávid Terjék, Diego González-Sánchez · MLT · 11 / 1 / 0 · 26 May 2022

Quadratic models for understanding catapult dynamics of neural networks
Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, M. Belkin · 27 / 13 / 0 · 24 May 2022

Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture
Libin Zhu, Chaoyue Liu, M. Belkin · GNN, AI4CE · 15 / 4 / 0 · 24 May 2022

Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization
Simone Bombari, Mohammad Hossein Amani, Marco Mondelli · 20 / 26 / 0 · 20 May 2022

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang · MLT · 31 / 121 / 0 · 03 May 2022

Adversarial Examples in Random Neural Networks with General Activations
Andrea Montanari, Yuchen Wu · GAN, AAML · 74 / 13 / 0 · 31 Mar 2022

An Empirical Study of Memorization in NLP
Xiaosen Zheng, Jing Jiang · TDI · 17 / 24 / 1 · 23 Mar 2022

On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes
Elvis Dohmatob, A. Bietti · AAML · 21 / 13 / 0 · 22 Mar 2022

Universality of empirical risk minimization
Andrea Montanari, Basil Saeed · OOD · 25 / 73 / 0 · 17 Feb 2022

Benign Overfitting in Two-layer Convolutional Neural Networks
Yuan Cao, Zixiang Chen, M. Belkin, Quanquan Gu · MLT · 19 / 82 / 0 · 14 Feb 2022

Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett · MLT · 34 / 69 / 0 · 11 Feb 2022

Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
Zhichao Wang, Yizhe Zhu · 25 / 18 / 0 · 20 Sep 2021

Deep Networks Provably Classify Data on Curves
Tingran Wang, Sam Buchanan, D. Gilboa, John N. Wright · 23 / 9 / 0 · 29 Jul 2021

Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off
Huiyuan Wang, Wei Lin · MLT · 24 / 4 / 0 · 09 Jun 2021

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob · 19 / 9 / 0 · 04 Jun 2021

A Geometric Analysis of Neural Collapse with Unconstrained Features
Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu · 16 / 195 / 0 · 06 May 2021

Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Yuan Cao, Quanquan Gu, M. Belkin · 4 / 51 / 0 · 28 Apr 2021

A Recipe for Global Convergence Guarantee in Deep Neural Networks
Kenji Kawaguchi, Qingyun Sun · 16 / 11 / 0 · 12 Apr 2021

When Are Solutions Connected in Deep Networks?
Quynh N. Nguyen, Pierre Bréchet, Marco Mondelli · 22 / 9 / 0 · 18 Feb 2021

On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
Kenji Kawaguchi · PINN · 20 / 41 / 0 · 15 Feb 2021

Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
Quynh N. Nguyen, Marco Mondelli, Guido Montúfar · 22 / 81 / 0 · 21 Dec 2020

Benign overfitting in ridge regression
Alexander Tsigler, Peter L. Bartlett · 23 / 159 / 0 · 29 Sep 2020

Deep Networks and the Multiple Manifold Problem
Sam Buchanan, D. Gilboa, John N. Wright · 166 / 39 / 0 · 25 Aug 2020

Large-time asymptotics in deep learning
Carlos Esteve, Borjan Geshkovski, Dario Pighin, Enrique Zuazua · 14 / 34 / 0 · 06 Aug 2020