arXiv:1908.05355
The generalization error of random features regression: Precise asymptotics and double descent curve
14 August 2019
Song Mei
Andrea Montanari
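The indexed paper concerns the test error of random features regression and its double descent curve. As a minimal illustrative sketch only (not the paper's asymptotic setup; the data model, ReLU feature map, noise level, and sizes below are all assumptions chosen for demonstration), ridgeless regression on random features can be swept past the interpolation threshold N ≈ n:

```python
# Sketch: min-norm least squares on N random ReLU features, sweeping N
# past the sample size n, where double descent predicts a test-error peak.
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test = 20, 100, 200

# Illustrative ground truth: noisy linear signal in d dimensions.
beta = rng.normal(size=d) / np.sqrt(d)
X_train = rng.normal(size=(n, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ beta + 0.1 * rng.normal(size=n)
y_test = X_test @ beta

def rf_test_error(N):
    """Test MSE of the minimum-norm least-squares fit on N random ReLU features."""
    W = rng.normal(size=(d, N)) / np.sqrt(d)          # fixed random first layer
    Z_train = np.maximum(X_train @ W, 0.0)            # ReLU feature map
    Z_test = np.maximum(X_test @ W, 0.0)
    a = np.linalg.pinv(Z_train) @ y_train             # min-norm interpolant when N > n
    return np.mean((Z_test @ a - y_test) ** 2)

# Sweep the number of features through the interpolation threshold N = n.
errors = {N: rf_test_error(N) for N in (10, 50, 90, 100, 110, 200, 400)}
```

Plotting `errors` against N traces the qualitative double-descent shape; the paper itself characterizes the exact high-dimensional asymptotics of this curve rather than a finite simulation.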
Papers citing "The generalization error of random features regression: Precise asymptotics and double descent curve" (27 of 227 shown):
"Generalisation error in learning with random features and the hidden manifold model" — Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová (21 Feb 2020)
"Implicit Regularization of Random Feature Models" — Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel (19 Feb 2020)
"Asymptotic errors for convex penalized linear regression beyond Gaussian matrices" — Cédric Gerbelot, A. Abbara, Florent Krzakala (11 Feb 2020)
"A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-ℓ1-Norm Interpolated Classifiers" — Tengyuan Liang, Pragya Sur (05 Feb 2020)
"A Deep Conditioning Treatment of Neural Networks" — Naman Agarwal, Pranjal Awasthi, Satyen Kale (04 Feb 2020) [AI4CE]
"Analytic Study of Double Descent in Binary Classification: The Impact of Loss" — Ganesh Ramachandra Kini, Christos Thrampoulidis (30 Jan 2020)
"On Interpretability of Artificial Neural Networks: A Survey" — Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang (08 Jan 2020) [AAML, AI4CE]
"Think Locally, Act Globally: Federated Learning with Local and Global Representations" — Paul Pu Liang, Terrance Liu, Liu Ziyin, Nicholas B. Allen, Randy P. Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency (06 Jan 2020) [FedML]
"Optimization for deep learning: theory and algorithms" — Ruoyu Sun (19 Dec 2019) [ODL]
"More Data Can Hurt for Linear Regression: Sample-wise Double Descent" — Preetum Nakkiran (16 Dec 2019)
"Mean-Field Neural ODEs via Relaxed Optimal Control" — Jean-François Jabir, D. Šiška, Lukasz Szpruch (11 Dec 2019) [MLT]
"Frivolous Units: Wider Networks Are Not Really That Wide" — Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman (10 Dec 2019)
"Exact expressions for double descent and implicit regularization via surrogate random design" — Michal Derezinski, Feynman T. Liang, Michael W. Mahoney (10 Dec 2019)
"In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors" — Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy (09 Dec 2019) [AI4CE]
"Deep Double Descent: Where Bigger Models and More Data Hurt" — Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever (04 Dec 2019)
"A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning" — Ben Adlam, J. Levinson, Jeffrey Pennington (02 Dec 2019)
"How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?" — Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu (27 Nov 2019)
"A Model of Double Descent for High-dimensional Binary Linear Classification" — Zeyu Deng, A. Kammoun, Christos Thrampoulidis (13 Nov 2019)
"A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case" — Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro (03 Oct 2019)
"Modelling the influence of data structure on learning in neural networks: the hidden manifold model" — Sebastian Goldt, M. Mézard, Florent Krzakala, Lenka Zdeborová (25 Sep 2019) [BDL]
"Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization" — T. Poggio, Andrzej Banburski, Q. Liao (25 Aug 2019) [ODL]
"Linearized two-layers neural networks in high dimension" — Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari (27 Apr 2019) [MLT]
"Surprises in High-Dimensional Ridgeless Least Squares Interpolation" — Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani (19 Mar 2019)
"Scaling description of generalization with number of parameters in deep learning" — Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, Matthieu Wyart (06 Jan 2019)
"On the Benefit of Width for Neural Networks: Disappearance of Bad Basins" — Dawei Li, Tian Ding, Ruoyu Sun (28 Dec 2018)
"Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity" — Chulhee Yun, S. Sra, Ali Jadbabaie (17 Oct 2018)
"Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning" — Charles H. Martin, Michael W. Mahoney (02 Oct 2018)