Reconciling modern machine learning practice and the bias-variance trade-off

M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
28 December 2018

Papers citing "Reconciling modern machine learning practice and the bias-variance trade-off"

Showing 50 of 313 citing papers.

Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
Yuxuan Du, Yibo Yang, Dacheng Tao, Min-hsiu Hsieh (29 Dec 2022)

The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for Deep Quantum Machine Learning
Massimiliano Incudini, Michele Grossi, Antonio Mandarino, S. Vallecorsa, Alessandra Di Pierro, David Windridge (22 Dec 2022)

Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris (13 Dec 2022)

Reliable extrapolation of deep neural operators informed by physics or sparse observations
Min Zhu, Handi Zhang, Anran Jiao, George Karniadakis, Lu Lu (13 Dec 2022)

Tight bounds for maximum $\ell_1$-margin classifiers
Stefan Stojanovic, Konstantin Donhauser, Fanny Yang (07 Dec 2022)

High Dimensional Binary Classification under Label Shift: Phase Transition and Regularization
Jiahui Cheng, Minshuo Chen, Hao Liu, Tuo Zhao, Wenjing Liao (01 Dec 2022)

Task Discovery: Finding the Tasks that Neural Networks Generalize on
Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, Amir Zamir (01 Dec 2022) [OOD]

Why Neural Networks Work
Sayan Mukherjee, Bernardo A. Huberman (26 Nov 2022)

The Vanishing Decision Boundary Complexity and the Strong First Component
Hengshuai Yao (25 Nov 2022) [UQCV]

A Survey of Learning Curves with Bad Behavior: or How More Data Need Not Lead to Better Performance
Marco Loog, T. Viering (25 Nov 2022)

Understanding the double descent curve in Machine Learning
Luis Sa-Couto, J. M. Ramos, Miguel Almeida, Andreas Wichert (18 Nov 2022)

Emergence of Concepts in DNNs?
Tim Räz (11 Nov 2022)

Do highly over-parameterized neural networks generalize since bad solutions are rare?
Julius Martinetz, T. Martinetz (07 Nov 2022)

Reward-Predictive Clustering
Lucas Lehnert, M. Frank, Michael L. Littman (07 Nov 2022) [OffRL]

Instance-Dependent Generalization Bounds via Optimal Transport
Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss (02 Nov 2022)

Globally Gated Deep Linear Networks
Qianyi Li, H. Sompolinsky (31 Oct 2022) [AI4CE]

A Law of Data Separation in Deep Learning
Hangfeng He, Weijie J. Su (31 Oct 2022) [OOD]

A Solvable Model of Neural Scaling Laws
A. Maloney, Daniel A. Roberts, J. Sully (30 Oct 2022)

Grokking phase transitions in learning local rules with gradient descent
Bojan Žunkovič, E. Ilievski (26 Oct 2022)

Second-order regression models exhibit progressive sharpening to the edge of stability
Atish Agarwala, Fabian Pedregosa, Jeffrey Pennington (10 Oct 2022)

Goal Misgeneralization: Why Correct Specifications Aren't Enough For Correct Goals
Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, J. Uesato, Zachary Kenton (04 Oct 2022)

Block-wise Training of Residual Networks via the Minimizing Movement Scheme
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari (03 Oct 2022)

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan (30 Sep 2022)

Why neural networks find simple solutions: the many regularizers of geometric complexity
Benoit Dherin, Michael Munn, M. Rosca, David Barrett (27 Sep 2022)

In-context Learning and Induction Heads
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova Dassarma, ..., Tom B. Brown, Jack Clark, Jared Kaplan, Sam McCandlish, C. Olah (24 Sep 2022)

Deep Double Descent via Smooth Interpolation
Matteo Gamba, Erik Englesson, Marten Bjorkman, Hossein Azizpour (21 Sep 2022)

Deep Linear Networks can Benignly Overfit when Shallow Ones Do
Niladri S. Chatterji, Philip M. Long (19 Sep 2022)

Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty
Thomas George, Guillaume Lajoie, A. Baratin (19 Sep 2022)

Importance Tempering: Group Robustness for Overparameterized Models
Yiping Lu, Wenlong Ji, Zachary Izzo, Lexing Ying (19 Sep 2022)

Random Fourier Features for Asymmetric Kernels
Ming-qian He, Fan He, Fanghui Liu, Xiaolin Huang (18 Sep 2022)

Towards Understanding the Overfitting Phenomenon of Deep Click-Through Rate Prediction Models
Zhaorui Zhang, Xiang-Rong Sheng, Yujing Zhang, Biye Jiang, Shuguang Han, Hongbo Deng, Bo Zheng (04 Sep 2022) [CML]

Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models
Ethan Pickering, T. Sapsis (27 Aug 2022)

On the Implicit Bias in Deep-Learning Algorithms
Gal Vardi (26 Aug 2022) [FedML, AI4CE]

Intersection of Parallels as an Early Stopping Criterion
Ali Vardasbi, Maarten de Rijke, Mostafa Dehghani (19 Aug 2022) [MoMe]

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant (01 Aug 2022)

The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks
Charles Edison Tripp, J. Perr-Sauer, L. Hayne, M. Lunacek, Jamil Gafur (25 Jul 2022) [AI4CE]

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran (14 Jul 2022)

Towards Multimodal Vision-Language Models Generating Non-Generic Text
Wes Robbins, Zanyar Zohourianshahzadi, Jugal Kalita (09 Jul 2022)

Target alignment in truncated kernel ridge regression
Arash A. Amini, R. Baumgartner, Dai Feng (28 Jun 2022)

Tensor-on-Tensor Regression: Riemannian Optimization, Over-parameterization, Statistical-computational Gap, and Their Interplay
Yuetian Luo, Anru R. Zhang (17 Jun 2022)

Fast Finite Width Neural Tangent Kernel
Roman Novak, Jascha Narain Sohl-Dickstein, S. Schoenholz (17 Jun 2022) [AAML]

Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin (17 Jun 2022)

Data-Efficient Brain Connectome Analysis via Multi-Task Meta-Learning
Yi Yang, Yanqiao Zhu, Hejie Cui, Xuan Kan, Lifang He, Ying Guo, Carl Yang (09 Jun 2022)

Trajectory-dependent Generalization Bounds for Deep Neural Networks via Fractional Brownian Motion
Chengli Tan, Jiang Zhang, Junmin Liu (09 Jun 2022)

Neural Collapse: A Review on Modelling Principles and Generalization
Vignesh Kothapalli (08 Jun 2022)

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel (03 Jun 2022)

Robust Weight Perturbation for Adversarial Training
Chaojian Yu, Bo Han, Biwei Huang, Li Shen, Shiming Ge, Bo Du, Tongliang Liu (30 May 2022) [AAML]

A Blessing of Dimensionality in Membership Inference through Regularization
Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk (27 May 2022)

Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
Binghui Li, Jikai Jin, Han Zhong, J. Hopcroft, Liwei Wang (27 May 2022) [OOD]

Symmetry Teleportation for Accelerated Optimization
B. Zhao, Nima Dehmamy, Robin Walters, Rose Yu (21 May 2022) [ODL]