Does data interpolation contradict statistical optimality?

M. Belkin, Alexander Rakhlin, Alexandre B. Tsybakov
25 June 2018

Papers citing "Does data interpolation contradict statistical optimality?"

50 / 52 papers shown

Supervised Kernel Thinning
Albert Gong, Kyuseong Choi, Raaz Dwivedi
17 Oct 2024

Analysis of the rate of convergence of an over-parametrized convolutional neural network image classifier learned by gradient descent
Michael Kohler, A. Krzyżak, Benjamin Walter
13 May 2024

Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation
Aaron Mishkin, Mert Pilanci, Mark Schmidt
03 Apr 2024

Analysis of the expected $L_2$ error of an over-parametrized deep neural network estimate learned by gradient descent without regularization
Selina Drews, Michael Kohler
24 Nov 2023

A decorrelation method for general regression adjustment in randomized experiments
Fangzhou Su, Wenlong Mou, Peng Ding, Martin J. Wainwright
16 Nov 2023

Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data
Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, Wei Hu
04 Oct 2023

A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks
Vignesh Kothapalli, Tom Tirer, Joan Bruna
04 Jul 2023

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart
23 May 2023

Towards understanding neural collapse in supervised contrastive learning with the information bottleneck method
Siwei Wang, S. Palmer
19 May 2023

A Simple Algorithm For Scaling Up Kernel Methods
Tengyu Xu, Bryan Kelly, Semyon Malamud
26 Jan 2023

Strong inductive biases provably prevent harmless interpolation
Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang
18 Jan 2023

Learning Lipschitz Functions by GD-trained Shallow Overparameterized ReLU Neural Networks
Ilja Kuzborskij, Csaba Szepesvári
28 Dec 2022

Private optimization in the interpolation regime: faster rates and hardness results
Hilal Asi, Karan N. Chadha, Gary Cheng, John C. Duchi
31 Oct 2022

Perturbation Analysis of Neural Collapse
Tom Tirer, Haoxiang Huang, Jonathan Niles-Weed
29 Oct 2022

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022

Deep Linear Networks can Benignly Overfit when Shallow Ones Do
Niladri S. Chatterji, Philip M. Long
19 Sep 2022

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran
14 Jul 2022

Benefits of Additive Noise in Composing Classes with Bounded Capacity
A. F. Pour, H. Ashtiani
14 Jun 2022

Benefit of Interpolation in Nearest Neighbor Algorithms
Yue Xing, Qifan Song, Guang Cheng
23 Feb 2022

HARFE: Hard-Ridge Random Feature Expansion
Esha Saha, Hayden Schaeffer, Giang Tran
06 Feb 2022

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
A. Bodin, N. Macris
22 Oct 2021

Conditioning of Random Feature Matrices: Double Descent and Generalization Error
Zhijun Chen, Hayden Schaeffer
21 Oct 2021

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk
06 Sep 2021

Knowledge Distillation as Semiparametric Inference
Tri Dao, G. Kamath, Vasilis Syrgkanis, Lester W. Mackey
20 Apr 2021

Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis
02 Mar 2021

SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien
18 Feb 2021

SGD Generalizes Better Than GD (And Regularization Doesn't Help)
Idan Amir, Tomer Koren, Roi Livni
01 Feb 2021

Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis
16 Dec 2020

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam, Jeffrey Pennington
04 Nov 2020

Prevalence of Neural Collapse during the terminal phase of deep learning training
Vardan Papyan, Xuemei Han, D. Donoho
18 Aug 2020

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman, Chiyuan Zhang
09 Aug 2020

Multiple Descent: Design Your Own Generalization Curve
Lin Chen, Yifei Min, M. Belkin, Amin Karbasi
03 Aug 2020

How benign is benign overfitting?
Amartya Sanyal, P. Dokania, Varun Kanade, Philip Torr
08 Jul 2020

Exploring Weight Importance and Hessian Bias in Model Pruning
Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak
19 Jun 2020

When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
18 Jun 2020

Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco
17 Jun 2020

Post-Estimation Smoothing: A Simple Baseline for Learning with Side Information
Esther Rolf, Michael I. Jordan, Benjamin Recht
12 Mar 2020

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan
07 Feb 2020

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur
05 Feb 2020

Exact expressions for double descent and implicit regularization via surrogate random design
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney
10 Dec 2019

A Model of Double Descent for High-dimensional Binary Linear Classification
Zeyu Deng, A. Kammoun, Christos Thrampoulidis
13 Nov 2019

The Role of Neural Network Activation Functions
Rahul Parhi, Robert D. Nowak
05 Oct 2019

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari
14 Aug 2019

Benign Overfitting in Linear Regression
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler
26 Jun 2019

Does Learning Require Memorization? A Short Tale about a Long Tail
Vitaly Feldman
12 Jun 2019

Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
27 Apr 2019

Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
27 Mar 2019

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
28 Dec 2018

Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun, S. Sra, Ali Jadbabaie
17 Oct 2018

Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
Hilal Asi, John C. Duchi
12 Oct 2018