Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, R. Tibshirani
arXiv:1903.08560 · 19 March 2019
Papers citing "Surprises in High-Dimensional Ridgeless Least Squares Interpolation" (50 of 139 papers shown):
- Demystifying Disagreement-on-the-Line in High Dimensions. Dong-Hwan Lee, Behrad Moniri, Xinmeng Huang, Yan Sun, Hamed Hassani. 31 Jan 2023.
- A Simple Algorithm For Scaling Up Kernel Methods. Tengyu Xu, Bryan Kelly, Semyon Malamud. 26 Jan 2023.
- Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures. Antoine Bodin, N. Macris. 13 Dec 2022.
- High Dimensional Binary Classification under Label Shift: Phase Transition and Regularization. Jiahui Cheng, Minshuo Chen, Hao Liu, Tuo Zhao, Wenjing Liao. 01 Dec 2022.
- A Survey of Learning Curves with Bad Behavior: or How More Data Need Not Lead to Better Performance. Marco Loog, T. Viering. 25 Nov 2022.
- A Consistent Estimator for Confounding Strength. Luca Rendsburg, L. C. Vankadara, D. Ghoshdastidar, U. V. Luxburg. [CML] 03 Nov 2022.
- Interpolating Discriminant Functions in High-Dimensional Gaussian Latent Mixtures. Xin Bing, M. Wegkamp. 25 Oct 2022.
- Deep Linear Networks can Benignly Overfit when Shallow Ones Do. Niladri S. Chatterji, Philip M. Long. 19 Sep 2022.
- Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty. Thomas George, Guillaume Lajoie, A. Baratin. 19 Sep 2022.
- Generalization Properties of NAS under Activation and Skip Connection Search. Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher. [AI4CE] 15 Sep 2022.
- Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models. Ethan Pickering, T. Sapsis. 27 Aug 2022.
- Sharp Analysis of Sketch-and-Project Methods via a Connection to Randomized Singular Value Decomposition. Michal Derezinski, E. Rebrova. 20 Aug 2022.
- Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting. Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran. 14 Jul 2022.
- Target alignment in truncated kernel ridge regression. Arash A. Amini, R. Baumgartner, Dai Feng. 28 Jun 2022.
- Provable Generalization of Overparameterized Meta-learning Trained with SGD. Yu Huang, Yingbin Liang, Longbo Huang. [MLT] 18 Jun 2022.
- Beyond Ridge Regression for Distribution-Free Data. Koby Bibas, M. Feder. 17 Jun 2022.
- Regularization-wise double descent: Why it occurs and how to eliminate it. Fatih Yilmaz, Reinhard Heckel. 03 Jun 2022.
- A Blessing of Dimensionality in Membership Inference through Regularization. Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk. 27 May 2022.
- Proximal Estimation and Inference. Alberto Quaini, F. Trojani. 26 May 2022.
- Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime. Hong Hu, Yue M. Lu. 13 May 2022.
- An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings. Yue M. Lu, H. Yau. 12 May 2022.
- Training-conditional coverage for distribution-free predictive inference. Michael Bian, Rina Foygel Barber. 07 May 2022.
- Benign Overfitting in Time Series Linear Models with Over-Parameterization. Shogo H. Nakakita, Masaaki Imaizumi. [AI4TS] 18 Apr 2022.
- Concentration of Random Feature Matrices in High-Dimensions. Zhijun Chen, Hayden Schaeffer, Rachel A. Ward. 14 Apr 2022.
- Convergence of gradient descent for deep neural networks. S. Chatterjee. [ODL] 30 Mar 2022.
- Generalization Through The Lens Of Leave-One-Out Error. Gregor Bachmann, Thomas Hofmann, Aurelien Lucchi. 07 Mar 2022.
- Estimation under Model Misspecification with Fake Features. Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén. 07 Mar 2022.
- Contrasting random and learned features in deep Bayesian linear regression. Jacob A. Zavatone-Veth, William L. Tong, C. Pehlevan. [BDL, MLT] 01 Mar 2022.
- Deep Ensembles Work, But Are They Necessary? Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, R. Zemel, John P. Cunningham. [OOD, UQCV] 14 Feb 2022.
- Exact Solutions of a Deep Linear Network. Liu Ziyin, Botao Li, Xiangmin Meng. [ODL] 10 Feb 2022.
- HARFE: Hard-Ridge Random Feature Expansion. Esha Saha, Hayden Schaeffer, Giang Tran. 06 Feb 2022.
- Benign Overfitting in Adversarially Robust Linear Classification. Jinghui Chen, Yuan Cao, Quanquan Gu. [AAML, SILM] 31 Dec 2021.
- Over-Parametrized Matrix Factorization in the Presence of Spurious Stationary Points. Armin Eftekhari. 25 Dec 2021.
- SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning. Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward. 07 Dec 2021.
- A generalization gap estimation for overparameterized models via the Langevin functional variance. Akifumi Okuno, Keisuke Yano. 07 Dec 2021.
- Multi-scale Feature Learning Dynamics: Insights for Double Descent. Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie. 06 Dec 2021.
- Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model. A. Bodin, N. Macris. 22 Oct 2021.
- Conditioning of Random Feature Matrices: Double Descent and Generalization Error. Zhijun Chen, Hayden Schaeffer. 21 Oct 2021.
- Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective. Adhyyan Narang, Vidya Muthukumar, A. Sahai. [SILM, AAML] 27 Sep 2021.
- A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning. Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk. 06 Sep 2021.
- Interpolation can hurt robust generalization even when there is no noise. Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang. 05 Aug 2021.
- Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks. Adityanarayanan Radhakrishnan, George Stefanakis, M. Belkin, Caroline Uhler. 31 Jul 2021.
- The loss landscape of deep linear neural networks: a second-order analysis. E. M. Achour, François Malgouyres, Sébastien Gerchinovitz. [ODL] 28 Jul 2021.
- Can we globally optimize cross-validation loss? Quasiconvexity in ridge regression. William T. Stephenson, Zachary Frangella, Madeleine Udell, Tamara Broderick. 19 Jul 2021.
- A Theoretical Analysis of Fine-tuning with Linear Teachers. Gal Shachaf, Alon Brutzkus, Amir Globerson. 04 Jul 2021.
- Random Neural Networks in the Infinite Width Limit as Gaussian Processes. Boris Hanin. [BDL] 04 Jul 2021.
- Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting. Frederic Koehler, Lijia Zhou, Danica J. Sutherland, Nathan Srebro. 17 Jun 2021.
- Double Descent and Other Interpolation Phenomena in GANs. Lorenzo Luzi, Yehuda Dar, Richard Baraniuk. 07 Jun 2021.
- Towards an Understanding of Benign Overfitting in Neural Networks. Zhu Li, Zhi-Hua Zhou, A. Gretton. [MLT] 06 Jun 2021.
- Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes. Elvis Dohmatob. 04 Jun 2021.