The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
Ben Adlam, Jeffrey Pennington · 15 August 2020 · arXiv:2008.06786
Papers citing "The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization" (31 papers shown)
auto-fpt: Automating Free Probability Theory Calculations for Machine Learning Theory
Arjun Subramonian, Elvis Dohmatob · 14 Apr 2025

Gradient Descent Robustly Learns the Intrinsic Dimension of Data in Training Convolutional Neural Networks
Chenyang Zhang, Peifeng Gao, Difan Zou, Yuan Cao · 11 Apr 2025 · OOD, MLT

Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
Blake Bordelon, Cengiz Pehlevan · 04 Feb 2025 · AI4CE

High dimensional analysis reveals conservative sharpening and a stochastic edge of stability
Atish Agarwala, Jeffrey Pennington · 30 Apr 2024

Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis
Yufan Li, Subhabrata Sen, Ben Adlam · 18 Apr 2024 · MLT

A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
Behrad Moniri, Donghwan Lee, Hamed Hassani, Yan Sun · 11 Oct 2023 · MLT

Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
T. Getu, Georges Kaddoum, M. Bennis · 13 Sep 2023

How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features
Simone Bombari, Marco Mondelli · 20 May 2023 · AAML

Subsample Ridge Ensembles: Equivalences and Generalized Cross-Validation
Jin-Hong Du, Pratik V. Patil, Arun K. Kuchibhotla · 25 Apr 2023

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
Blake Bordelon, Cengiz Pehlevan · 06 Apr 2023 · MLT

Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels
Simone Bombari, Shayan Kiyani, Marco Mondelli · 03 Feb 2023 · AAML

Demystifying Disagreement-on-the-Line in High Dimensions
Dong-Hwan Lee, Behrad Moniri, Xinmeng Huang, Yan Sun, Hamed Hassani · 31 Jan 2023
Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris · 13 Dec 2022
Second-order regression models exhibit progressive sharpening to the edge of stability
Atish Agarwala, Fabian Pedregosa, Jeffrey Pennington · 10 Oct 2022

The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks
Charles Edison Tripp, J. Perr-Sauer, L. Hayne, M. Lunacek, Jamil Gafur · 25 Jul 2022 · AI4CE

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel · 03 Jun 2022

Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime
Hong Hu, Yue M. Lu · 13 May 2022

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang · 03 May 2022 · MLT

Contrasting random and learned features in deep Bayesian linear regression
Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan · 01 Mar 2022 · BDL, MLT

A generalization gap estimation for overparameterized models via the Langevin functional variance
Akifumi Okuno, Keisuke Yano · 07 Dec 2021

Understanding Square Loss in Training Overparametrized Neural Network Classifiers
Tianyang Hu, Jun Wang, Wei Cao, Zhenguo Li · 07 Dec 2021 · UQCV, AAML

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
A. Bodin, N. Macris · 22 Oct 2021

Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
Zhichao Wang, Yizhe Zhu · 20 Sep 2021

Dataset Distillation with Infinitely Wide Convolutional Networks
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee · 27 Jul 2021 · DD

Random Neural Networks in the Infinite Width Limit as Gaussian Processes
Boris Hanin · 04 Jul 2021 · BDL

Towards an Understanding of Benign Overfitting in Neural Networks
Zhu Li, Zhi-Hua Zhou, Arthur Gretton · 06 Jun 2021 · MLT

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob · 04 Jun 2021

Appearance of Random Matrix Theory in Deep Learning
Nicholas P. Baskerville, Diego Granziol, J. Keating · 12 Feb 2021

Explaining Neural Scaling Laws
Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, Utkarsh Sharma · 12 Feb 2021

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington · 04 Nov 2020 · UD

Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Z. Fan, Zhichao Wang · 25 May 2020