arXiv:1908.05355
The generalization error of random features regression: Precise asymptotics and double descent curve
14 August 2019
Song Mei
Andrea Montanari
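As a toy illustration of the phenomenon the paper analyzes, the sketch below fits ridge regression on random ReLU features and sweeps the number of features N through the interpolation threshold N = n. This is an assumed minimal setup for demonstration, not the paper's exact random-features model or its precise asymptotics; the function name `rf_test_error` and all parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rf_test_error(n_features, n_train=100, dim=20, n_test=500, noise=0.1, ridge=1e-8):
    """Train ridge regression on random ReLU features and return test MSE.

    Minimal random-features model: f(x) = a^T relu(W x), with W fixed at random
    and only the second-layer weights a fit by (nearly ridgeless) least squares.
    """
    beta = rng.standard_normal(dim) / np.sqrt(dim)            # teacher vector
    W = rng.standard_normal((n_features, dim)) / np.sqrt(dim)  # frozen random features
    Xtr = rng.standard_normal((n_train, dim))
    Xte = rng.standard_normal((n_test, dim))
    ytr = Xtr @ beta + noise * rng.standard_normal(n_train)
    yte = Xte @ beta
    Ftr = np.maximum(Xtr @ W.T, 0.0)                          # (n_train, N) feature matrix
    Fte = np.maximum(Xte @ W.T, 0.0)
    # ridge solution: a = (F^T F + lambda I)^{-1} F^T y
    a = np.linalg.solve(Ftr.T @ Ftr + ridge * np.eye(n_features), Ftr.T @ ytr)
    return float(np.mean((Fte @ a - yte) ** 2))

# Sweep N across the interpolation threshold N = n_train; averaged over a few
# draws, the test error typically peaks near N = 100 and falls again beyond it.
for N in (10, 50, 100, 200, 800):
    err = np.mean([rf_test_error(N) for _ in range(5)])
    print(N, round(err, 4))
```

Averaging over several random draws matters here: near the interpolation threshold the nearly ridgeless solve is ill-conditioned, so single-run errors fluctuate widely.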
Papers citing "The generalization error of random features regression: Precise asymptotics and double descent curve" (showing 50 of 227):
- Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models. J. Rocks, Pankaj Mehta. 26 Oct 2020.
- Train simultaneously, generalize better: Stability of gradient-based minimax learners. Farzan Farnia, Asuman Ozdaglar. 23 Oct 2020.
- Precise High-Dimensional Asymptotics for Quantifying Heterogeneous Transfers. Fan Yang, Hongyang R. Zhang, Sen Wu, Christopher Ré, Weijie J. Su. 22 Oct 2020.
- Precise Statistical Analysis of Classification Accuracies for Adversarial Training. Adel Javanmard, Mahdi Soltanolkotabi. 21 Oct 2020.
- What causes the test error? Going beyond bias-variance via ANOVA. Licong Lin, Yan Sun. 11 Oct 2020.
- Strong replica symmetry for high-dimensional disordered log-concave Gibbs measures. Jean Barbier, D. Panchenko, Manuel Sáenz. 27 Sep 2020.
- On the proliferation of support vectors in high dimensions. Daniel J. Hsu, Vidya Muthukumar, Ji Xu. 22 Sep 2020.
- Distributional Generalization: A New Kind of Generalization. Preetum Nakkiran, Yamini Bansal. 17 Sep 2020.
- Asymptotics of Wide Convolutional Neural Networks. Anders Andreassen, Ethan Dyer. 19 Aug 2020.
- The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization. Ben Adlam, Jeffrey Pennington. 15 Aug 2020.
- Provable More Data Hurt in High Dimensional Least Squares Estimator. Zeng Li, Chuanlong Xie, Qinwen Wang. 14 Aug 2020.
- The Slow Deterioration of the Generalization Error of the Random Feature Model. Chao Ma, Lei Wu, Weinan E. 13 Aug 2020.
- Multiple Descent: Design Your Own Generalization Curve. Lin Chen, Yifei Min, M. Belkin, Amin Karbasi. 03 Aug 2020.
- The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training. Andrea Montanari, Yiqiao Zhong. 25 Jul 2020.
- Early Stopping in Deep Networks: Double Descent and How to Eliminate it. Reinhard Heckel, Fatih Yilmaz. 20 Jul 2020.
- Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? Yaniv Blumenfeld, D. Gilboa, Daniel Soudry. 02 Jul 2020.
- The Gaussian equivalence of generative models for learning with shallow neural networks. Sebastian Goldt, Bruno Loureiro, Galen Reeves, Florent Krzakala, M. Mézard, Lenka Zdeborová. 25 Jun 2020.
- Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks. Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan. 23 Jun 2020.
- On Sparsity in Overparametrised Shallow ReLU Networks. Jaume de Dios, Joan Bruna. 18 Jun 2020.
- Kernel Alignment Risk Estimator: Risk Prediction from Training Data. Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel. 17 Jun 2020.
- Reservoir Computing meets Recurrent Kernels and Structured Transforms. Jonathan Dong, Ruben Ohana, M. Rafayelyan, Florent Krzakala. 12 Jun 2020.
- Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks. Yehuda Dar, Richard G. Baraniuk. 12 Jun 2020.
- Asymptotic Errors for Teacher-Student Convex Generalized Linear Models (or: How to Prove Kabashima's Replica Formula). Cédric Gerbelot, A. Abbara, Florent Krzakala. 11 Jun 2020.
- Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization. Benjamin Aubin, Florent Krzakala, Yue M. Lu, Lenka Zdeborová. 11 Jun 2020.
- Asymptotics of Ridge(less) Regression under General Source Condition. Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco. 11 Jun 2020.
- On Uniform Convergence and Low-Norm Interpolation Learning. Lijia Zhou, Danica J. Sutherland, Nathan Srebro. 10 Jun 2020.
- On the Optimal Weighted ℓ₂ Regularization in Overparameterized Linear Regression. Denny Wu, Ji Xu. 10 Jun 2020.
- A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent. Zhenyu Liao, Romain Couillet, Michael W. Mahoney. 09 Jun 2020.
- Halting Time is Predictable for Large Models: A Universality Property and Average-case Analysis. Courtney Paquette, B. V. Merrienboer, Elliot Paquette, Fabian Pedregosa. 08 Jun 2020.
- Triple descent and the two kinds of overfitting: Where & why do they appear? Stéphane d'Ascoli, Levent Sagun, Giulio Biroli. 05 Jun 2020.
- Optimal Learning with Excitatory and Inhibitory synapses. Alessandro Ingrosso. 25 May 2020.
- Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks. Z. Fan, Zhichao Wang. 25 May 2020.
- Model Repair: Robust Recovery of Over-Parameterized Statistical Models. Chao Gao, John D. Lafferty. 20 May 2020.
- Classification vs regression in overparameterized regimes: Does the loss function matter? Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai. 16 May 2020.
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations. Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang. 09 May 2020.
- Generalization Error of Generalized Linear Models in High Dimensions. M. Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher. 01 May 2020.
- Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime. Niladri S. Chatterji, Philip M. Long. 25 Apr 2020.
- Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond. Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens. 23 Apr 2020.
- Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks. Tengyuan Liang, Hai Tran-Bach. 09 Apr 2020.
- Regularization in High-Dimensional Regression and Classification via Random Matrix Theory. Panagiotis Lolas. 30 Mar 2020.
- Getting Better from Worse: Augmented Bagging and a Cautionary Tale of Variable Importance. L. Mentch, Siyu Zhou. 07 Mar 2020.
- Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited. Wesley J. Maddox, Gregory W. Benton, A. Wilson. 04 Mar 2020.
- Optimal Regularization Can Mitigate Double Descent. Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma. 04 Mar 2020.
- Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime. Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala. 02 Mar 2020.
- Disentangling Adaptive Gradient Methods from Learning Rates. Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, Cyril Zhang. 26 Feb 2020.
- The role of regularization in classification of high-dimensional noisy Gaussian mixture. Francesca Mignacco, Florent Krzakala, Yue M. Lu, Lenka Zdeborová. 26 Feb 2020.
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks. Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi-An Ma. 26 Feb 2020.
- The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization. Yifei Min, Lin Chen, Amin Karbasi. 25 Feb 2020.
- Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors. Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk. 25 Feb 2020.
- Precise Tradeoffs in Adversarial Training for Linear Regression. Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani. 24 Feb 2020.