Surprises in High-Dimensional Ridgeless Least Squares Interpolation

19 March 2019
Trevor Hastie, Andrea Montanari, Saharon Rosset, Ryan J. Tibshirani
arXiv:1903.08560
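As context for the citation list below, here is a minimal sketch of the estimator the paper studies: "ridgeless" least squares is the minimum-ℓ2-norm interpolator of the training data, equivalently the limit of ridge regression as the penalty λ → 0+. This is an illustration, not code from the paper; the dimensions and random data are arbitrary assumptions.

```python
# Minimal sketch (illustrative, not from the paper): the "ridgeless"
# estimator is the minimum-l2-norm least squares interpolator, obtained
# as the ridge solution in the limit lambda -> 0+.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 400                          # overparameterized regime: p > n
X = rng.standard_normal((n, p))          # arbitrary Gaussian design
y = rng.standard_normal(n)

# Min-norm interpolator: beta_hat = X^+ y (Moore-Penrose pseudoinverse).
beta_min_norm = np.linalg.pinv(X) @ y

# Ridge estimator beta_lambda = (X^T X + lambda I)^{-1} X^T y
# with a small penalty, approximating the lambda -> 0+ limit.
lam = 1e-6
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(np.max(np.abs(X @ beta_min_norm - y)))       # ~0: exact interpolation
print(np.max(np.abs(beta_ridge - beta_min_norm)))  # ~0: ridgeless limit
```

When p > n there are infinitely many interpolating solutions; the pseudoinverse selects the one with the smallest ℓ2 norm, which is also the solution gradient descent reaches from zero initialization.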

Papers citing "Surprises in High-Dimensional Ridgeless Least Squares Interpolation"

Showing 43 of 143 citing papers.
Towards an Understanding of Benign Overfitting in Neural Networks
Zhu Li, Zhi-Hua Zhou, A. Gretton · 06 Jun 2021 · MLT

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob · 04 Jun 2021

AdaBoost and robust one-bit compressed sensing
Geoffrey Chinot, Felix Kuchelmeister, Matthias Löffler, Sara van de Geer · 05 May 2021

Generalization Guarantees for Neural Architecture Search with Train-Validation Split
Samet Oymak, Mingchen Li, Mahdi Soltanolkotabi · 29 Apr 2021 · AI4CE, OOD

The Shape of Learning Curves: a Review
T. Viering, Marco Loog · 19 Mar 2021

Learning curves of generic features maps for realistic datasets with a teacher-student model
Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, M. Mézard, Lenka Zdeborová · 16 Feb 2021

When and How Mixup Improves Calibration
Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou · 11 Feb 2021 · UQCV

Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis · 16 Dec 2020

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington · 04 Nov 2020 · UD

Dataset Meta-Learning from Kernel Ridge-Regression
Timothy Nguyen, Zhourong Chen, Jaehoon Lee · 30 Oct 2020 · DD

Precise High-Dimensional Asymptotics for Quantifying Heterogeneous Transfers
Fan Yang, Hongyang R. Zhang, Sen Wu, Christopher Ré, Weijie J. Su · 22 Oct 2020

Precise Statistical Analysis of Classification Accuracies for Adversarial Training
Adel Javanmard, Mahdi Soltanolkotabi · 21 Oct 2020 · AAML

On the proliferation of support vectors in high dimensions
Daniel J. Hsu, Vidya Muthukumar, Ji Xu · 22 Sep 2020

Multiple Descent: Design Your Own Generalization Curve
Lin Chen, Yifei Min, M. Belkin, Amin Karbasi · 03 Aug 2020 · DRL

The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong · 25 Jul 2020

How benign is benign overfitting?
Amartya Sanyal, P. Dokania, Varun Kanade, Philip H. S. Torr · 08 Jul 2020 · NoLa, AAML

Exploring Weight Importance and Hessian Bias in Model Pruning
Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak · 19 Jun 2020

When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu · 18 Jun 2020

Precise expressions for random projections: Low-rank approximation and randomized Newton
Michal Derezinski, Feynman T. Liang, Zhenyu A. Liao, Michael W. Mahoney · 18 Jun 2020

Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana, Alessandro Rudi, E. De Vito, Lorenzo Rosasco · 17 Jun 2020

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
Yehuda Dar, Richard G. Baraniuk · 12 Jun 2020

To Each Optimizer a Norm, To Each Norm its Generalization
Sharan Vaswani, Reza Babanezhad, Jose Gallego, Aaron Mishkin, Simon Lacoste-Julien, Nicolas Le Roux · 11 Jun 2020

Double Descent Risk and Volume Saturation Effects: A Geometric Perspective
Prasad Cheema, M. Sugiyama · 08 Jun 2020

Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Z. Fan, Zhichao Wang · 25 May 2020

Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai · 16 May 2020

An Investigation of Why Overparameterization Exacerbates Spurious Correlations
Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang · 09 May 2020

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens · 23 Apr 2020 · BDL

Boolean learning under noise-perturbations in hardware neural networks
Louis Andréoli, X. Porte, Stéphane Chrétien, M. Jacquot, L. Larger, Daniel Brunner · 27 Mar 2020

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala · 02 Mar 2020

Understanding and Mitigating the Tradeoff Between Robustness and Accuracy
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang · 25 Feb 2020 · AAML

Generalisation error in learning with random features and the hidden manifold model
Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová · 21 Feb 2020

Implicit Regularization of Random Feature Models
Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel · 19 Feb 2020

Learning Not to Learn in the Presence of Noisy Labels
Liu Ziyin, Blair Chen, Ru Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda · 16 Feb 2020 · NoLa

Asymptotic errors for convex penalized linear regression beyond Gaussian matrices
Cédric Gerbelot, A. Abbara, Florent Krzakala · 11 Feb 2020

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur · 05 Feb 2020

Exact expressions for double descent and implicit regularization via surrogate random design
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney · 10 Dec 2019

In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors
Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy · 09 Dec 2019 · AI4CE

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
Colin Wei, Tengyu Ma · 09 Oct 2019 · AAML, OOD

Ridge Regression: Structure, Cross-Validation, and Sketching
Sifan Liu, Yan Sun · 06 Oct 2019 · CML

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari · 14 Aug 2019

Does Learning Require Memorization? A Short Tale about a Long Tail
Vitaly Feldman · 12 Jun 2019 · TDI

Scaling description of generalization with number of parameters in deep learning
Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart · 06 Jan 2019

High-dimensional dynamics of generalization error in neural networks
Madhu S. Advani, Andrew M. Saxe · 10 Oct 2017 · AI4CE