What causes the test error? Going beyond bias-variance via ANOVA
11 October 2020
Licong Lin, Yan Sun

Papers citing "What causes the test error? Going beyond bias-variance via ANOVA"

46 papers:
1. Understanding Model Ensemble in Transferable Adversarial Attack. Wei Yao, Zeliang Zhang, Huayi Tang, Yong Liu. 09 Oct 2024.
2. A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks. Behrad Moniri, Donghwan Lee, Hamed Hassani, Yan Sun. 11 Oct 2023.
3. Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition. Ben Adlam, Jeffrey Pennington. 04 Nov 2020.
4. Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models. J. Rocks, Pankaj Mehta. 26 Oct 2020.
5. The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization. Ben Adlam, Jeffrey Pennington. 15 Aug 2020.
6. Provable More Data Hurt in High Dimensional Least Squares Estimator. Zeng Li, Chuanlong Xie, Qinwen Wang. 14 Aug 2020.
7. Multiple Descent: Design Your Own Generalization Curve. Lin Chen, Yifei Min, M. Belkin, Amin Karbasi. 03 Aug 2020.
8. Deep Isometric Learning for Visual Recognition. Haozhi Qi, Chong You, Xinyu Wang, Yi-An Ma, Jitendra Malik. 30 Jun 2020.
9. On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression. Denny Wu, Ji Xu. 10 Jun 2020.
10. A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent. Zhenyu Liao, Romain Couillet, Michael W. Mahoney. 09 Jun 2020.
11. Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
12. Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks. Z. Fan, Zhichao Wang. 25 May 2020.
13. A Brief Prehistory of Double Descent. Marco Loog, T. Viering, A. Mey, Jesse H. Krijthe, David Tax. 07 Apr 2020.
14. Optimal Regularization Can Mitigate Double Descent. Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma. 04 Mar 2020.
15. Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime. Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala. 02 Mar 2020.
16. Rethinking Bias-Variance Trade-off for Generalization of Neural Networks. Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi-An Ma. 26 Feb 2020.
17. Generalisation error in learning with random features and the hidden manifold model. Federica Gerace, Bruno Loureiro, Florent Krzakala, M. Mézard, Lenka Zdeborová. 21 Feb 2020.
18. Implicit Regularization of Random Feature Models. Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel. 19 Feb 2020.
19. Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks. Wei Hu, Lechao Xiao, Jeffrey Pennington. 16 Jan 2020.
20. More Data Can Hurt for Linear Regression: Sample-wise Double Descent. Preetum Nakkiran. 16 Dec 2019.
21. Exact expressions for double descent and implicit regularization via surrogate random design. Michal Derezinski, Feynman T. Liang, Michael W. Mahoney. 10 Dec 2019.
22. Deep Double Descent: Where Bigger Models and More Data Hurt. Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever. 04 Dec 2019.
23. A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning. Ben Adlam, J. Levinson, Jeffrey Pennington. 02 Dec 2019.
24. A Model of Double Descent for High-dimensional Binary Linear Classification. Zeyu Deng, A. Kammoun, Christos Thrampoulidis. 13 Nov 2019.
25. Ridge Regression: Structure, Cross-Validation, and Sketching. Sifan Liu, Yan Sun. 06 Oct 2019.
26. Modelling the influence of data structure on learning in neural networks: the hidden manifold model. Sebastian Goldt, M. Mézard, Florent Krzakala, Lenka Zdeborová. 25 Sep 2019.
27. The generalization error of random features regression: Precise asymptotics and double descent curve. Song Mei, Andrea Montanari. 14 Aug 2019.
28. Benign Overfitting in Linear Regression. Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler. 26 Jun 2019.
29. Linearized two-layers neural networks in high dimension. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari. 27 Apr 2019.
30. WONDER: Weighted one-shot distributed ridge regression in high dimensions. Yan Sun, Yueqi Sheng. 22 Mar 2019.
31. Harmless interpolation of noisy data in regression. Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, A. Sahai. 21 Mar 2019.
32. Surprises in High-Dimensional Ridgeless Least Squares Interpolation. Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani. 19 Mar 2019.
33. Two models of double descent for weak features. M. Belkin, Daniel J. Hsu, Ji Xu. 18 Mar 2019.
34. Scaling description of generalization with number of parameters in deep learning. Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, Matthieu Wyart. 06 Jan 2019.
35. Reconciling modern machine learning practice and the bias-variance trade-off. M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal. 28 Dec 2018.
36. A Modern Take on the Bias-Variance Tradeoff in Neural Networks. Brady Neal, Sarthak Mittal, A. Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas. 19 Oct 2018.
37. Distributed linear regression by averaging. Yan Sun, Yueqi Sheng. 30 Sep 2018.
38. Just Interpolate: Kernel "Ridgeless" Regression Can Generalize. Tengyuan Liang, Alexander Rakhlin. 01 Aug 2018.
39. Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. M. Belkin, Daniel J. Hsu, P. Mitra. 13 Jun 2018.
40. On the Spectrum of Random Features Maps of High Dimensional Data. Zhenyu Liao, Romain Couillet. 30 May 2018.
41. Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization. D. Kobak, Jonathan Lomond, Benoit Sanchez. 28 May 2018.
42. High-dimensional dynamics of generalization error in neural networks. Madhu S. Advani, Andrew M. Saxe. 10 Oct 2017.
43. A Random Matrix Approach to Neural Networks. Cosme Louart, Zhenyu Liao, Romain Couillet. 17 Feb 2017.
44. A Large Dimensional Analysis of Least Squares Support Vector Machines. Zhenyu Liao, Romain Couillet. 11 Jan 2017.
45. Wide Residual Networks. Sergey Zagoruyko, N. Komodakis. 23 May 2016.
46. High-Dimensional Asymptotics of Prediction: Ridge Regression and Classification. Yan Sun, Stefan Wager. 10 Jul 2015.