1908.05355
The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari
14 August 2019
Papers citing
"The generalization error of random features regression: Precise asymptotics and double descent curve"
50 / 227 papers shown
A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk (06 Sep 2021)

When and how epochwise double descent happens
Cory Stephenson, Tyler Lee (26 Aug 2021)

A spectral-based analysis of the separation between two-layer neural networks and linear methods
Lei Wu, Jihao Long (10 Aug 2021)

Taxonomizing local versus global structure in neural network loss landscapes
Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney (23 Jul 2021)
Random feature neural networks learn Black-Scholes type PDEs without curse of dimensionality
Lukas Gonon (14 Jun 2021)

Curiously Effective Features for Image Quality Prediction
S. Becker, Thomas Wiegand, S. Bosse (10 Jun 2021)

Probing transfer learning with a model of synthetic correlated datasets
Federica Gerace, Luca Saglietti, Stefano Sarao Mannelli, Andrew M. Saxe, Lenka Zdeborová (09 Jun 2021) [OOD]

Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off
Huiyuan Wang, Wei Lin (09 Jun 2021) [MLT]
Dynamics of Stochastic Momentum Methods on Large-scale, Quadratic Models
Courtney Paquette, Elliot Paquette (07 Jun 2021) [ODL]

Towards an Understanding of Benign Overfitting in Neural Networks
Zhu Li, Zhi Zhou, Arthur Gretton (06 Jun 2021) [MLT]

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob (04 Jun 2021)

Double Descent Optimization Pattern and Aliasing: Caveats of Noisy Labels
Florian Dubost, Erin Hong, Max Pike, Siddharth Sharma, Siyi Tang, Nandita Bhaskhar, Christopher Lee-Messer, D. Rubin (03 Jun 2021) [NoLa]
Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime
Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová (31 May 2021)

Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
M. Belkin (29 May 2021)

Support vector machines and linear regression coincide with very high-dimensional features
Navid Ardeshir, Clayton Sanford, Daniel J. Hsu (28 May 2021)

A Universal Law of Robustness via Isoperimetry
Sébastien Bubeck, Mark Sellke (26 May 2021)
A Geometric Analysis of Neural Collapse with Unconstrained Features
Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu (06 May 2021)

AdaBoost and robust one-bit compressed sensing
Geoffrey Chinot, Felix Kuchelmeister, Matthias Löffler, Sara van de Geer (05 May 2021)

How rotational invariance of common kernels prevents generalization in high dimensions
Konstantin Donhauser, Mingqi Wu, Fanny Yang (09 Apr 2021)

Trees, Forests, Chickens, and Eggs: When and Why to Prune Trees in a Random Forest
Siyu Zhou, L. Mentch (30 Mar 2021)
Lower Bounds on the Generalization Error of Nonlinear Learning Models
Inbar Seroussi, Ofer Zeitouni (26 Mar 2021)

The Geometry of Over-parameterized Regression and Adversarial Perturbations
J. Rocks, Pankaj Mehta (25 Mar 2021) [AAML]

On the interplay between data structure and loss function in classification problems
Stéphane d'Ascoli, Marylou Gabrié, Levent Sagun, Giulio Biroli (09 Mar 2021)

Asymptotics of Ridge Regression in Convolutional Models
Mojtaba Sahraee-Ardakan, Tung Mai, Anup B. Rao, Ryan Rossi, S. Rangan, A. Fletcher (08 Mar 2021) [MLT]
Generalization Bounds for Sparse Random Feature Expansions
Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel A. Ward (04 Mar 2021) [MLT]

Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis (02 Mar 2021)

Asymptotic Risk of Overparameterized Likelihood Models: Double Descent Theory for Deep Neural Networks
Ryumei Nakada, Masaaki Imaizumi (28 Feb 2021)

Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed
Maria Refinetti, Sebastian Goldt, Florent Krzakala, Lenka Zdeborová (23 Feb 2021)
Adversarially Robust Kernel Smoothing
Jia-Jie Zhu, Christina Kouridi, Yassine Nemmour, Bernhard Schölkopf (16 Feb 2021)

Learning curves of generic features maps for realistic datasets with a teacher-student model
Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, M. Mézard, Lenka Zdeborová (16 Feb 2021)

Double-descent curves in neural networks: a new perspective using Gaussian processes
Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle Pérez, A. Louis (14 Feb 2021)

Appearance of Random Matrix Theory in Deep Learning
Nicholas P. Baskerville, Diego Granziol, J. Keating (12 Feb 2021)
Explaining Neural Scaling Laws
Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, Utkarsh Sharma (12 Feb 2021)

SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality
Courtney Paquette, Kiwon Lee, Fabian Pedregosa, Elliot Paquette (08 Feb 2021)

Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration
Song Mei, Theodor Misiakiewicz, Andrea Montanari (26 Jan 2021)

Phases of learning dynamics in artificial neural networks: with or without mislabeled data
Yu Feng, Y. Tu (16 Jan 2021)
Fundamental Tradeoffs in Distributionally Adversarial Training
M. Mehrabi, Adel Javanmard, Ryan A. Rossi, Anup B. Rao, Tung Mai (15 Jan 2021) [AAML]

Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
Quynh N. Nguyen, Marco Mondelli, Guido Montúfar (21 Dec 2020)

Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis (16 Dec 2020)

Avoiding The Double Descent Phenomenon of Random Feature Models Using Hybrid Regularization
Kelvin K. Kan, J. Nagy, Lars Ruthotto (11 Dec 2020) [AI4CE]
Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately
Fereshte Khani, Percy Liang (07 Dec 2020) [FaML]

Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization
Qianyi Li, H. Sompolinsky (07 Dec 2020)

Solvable Model for Inheriting the Regularization through Knowledge Distillation
Luca Saglietti, Lenka Zdeborová (01 Dec 2020)

On Generalization of Adaptive Methods for Over-parameterized Linear Regression
Vatsal Shah, Soumya Basu, Anastasios Kyrillidis, Sujay Sanghavi (28 Nov 2020) [AI4CE]

Dimensionality reduction, regularization, and generalization in overparameterized regressions
Ningyuan Huang, D. Hogg, Soledad Villar (23 Nov 2020)
Sparse sketches with small inversion bias
Michal Derezinski, Zhenyu Liao, Yan Sun, Michael W. Mahoney (21 Nov 2020)

Gradient Starvation: A Learning Proclivity in Neural Networks
Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie (18 Nov 2020) [MLT]

Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization
Ke Wang, Christos Thrampoulidis (18 Nov 2020)
Underspecification Presents Challenges for Credibility in Modern Machine Learning
Alexander D'Amour, Katherine A. Heller, D. Moldovan, Ben Adlam, B. Alipanahi, ..., Kellie Webster, Steve Yadlowsky, T. Yun, Xiaohua Zhai, D. Sculley (06 Nov 2020) [OffRL]

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington (04 Nov 2020) [UD]