ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2305.16179
Dropout Drops Double Descent
25 May 2023
Tianbao Yang, J. Suzuki

Papers citing "Dropout Drops Double Descent"

34 papers shown
  • Multiple Descent: Design Your Own Generalization Curve. Lin Chen, Yifei Min, M. Belkin, Amin Karbasi. 03 Aug 2020. [DRL]
  • Early Stopping in Deep Networks: Double Descent and How to Eliminate it. Reinhard Heckel, Fatih Yilmaz. 20 Jul 2020.
  • On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression. Denny Wu, Ji Xu. 10 Jun 2020.
  • Dropout: Explicit Forms and Capacity Control. R. Arora, Peter L. Bartlett, Poorya Mianjy, Nathan Srebro. 06 Mar 2020.
  • Optimal Regularization Can Mitigate Double Descent. Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma. 04 Mar 2020.
  • The Implicit and Explicit Regularization Effects of Dropout. Colin Wei, Sham Kakade, Tengyu Ma. 28 Feb 2020.
  • Deep Double Descent: Where Bigger Models and More Data Hurt. Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever. 04 Dec 2019.
  • The Implicit Regularization of Ordinary Least Squares Ensembles. Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk. 10 Oct 2019.
  • Learning Sparse Networks Using Targeted Dropout. Aidan Gomez, Ivan Zhang, Siddhartha Rao Kamalakara, Divyam Madaan, Kevin Swersky, Y. Gal, Geoffrey E. Hinton. 31 May 2019.
  • Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks. Guangyong Chen, Pengfei Chen, Yujun Shi, Chang-Yu Hsieh, B. Liao, Shengyu Zhang. 15 May 2019. [OOD]
  • Surprises in High-Dimensional Ridgeless Least Squares Interpolation. Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani. 19 Mar 2019.
  • Two models of double descent for weak features. M. Belkin, Daniel J. Hsu, Ji Xu. 18 Mar 2019.
  • Ising-Dropout: A Regularization Method for Training and Compression of Deep Neural Networks. Hojjat Salehinejad, S. Valaee. 07 Feb 2019.
  • Reconciling modern machine learning practice and the bias-variance trade-off. M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal. 28 Dec 2018.
  • Neural Tangent Kernel: Convergence and Generalization in Neural Networks. Arthur Jacot, Franck Gabriel, Clément Hongler. 20 Jun 2018.
  • Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift. Xiang Li, Shuo Chen, Xiaolin Hu, Jian Yang. 16 Jan 2018.
  • Regularization of Deep Neural Networks with Spectral Dropout. Salman Khan, Munawar Hayat, Fatih Porikli. 23 Nov 2017.
  • Adversarial Dropout Regularization. Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko. 05 Nov 2017. [GAN]
  • High-dimensional dynamics of generalization error in neural networks. Madhu S. Advani, Andrew M. Saxe. 10 Oct 2017. [AI4CE]
  • Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. Han Xiao, Kashif Rasul, Roland Vollgraf. 25 Aug 2017.
  • A Closer Look at Memorization in Deep Networks. Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien. 16 Jun 2017. [TDI]
  • Concrete Dropout. Y. Gal, Jiri Hron, Alex Kendall. 22 May 2017. [BDL, UQCV]
  • Information Dropout: Learning Optimal Representations Through Noisy Computation. Alessandro Achille, Stefano Soatto. 04 Nov 2016. [OOD, DRL, SSL]
  • Generalized ridge estimator and model selection criterion in multivariate linear regression. Y. Mori, Taiji Suzuki. 31 Mar 2016.
  • Improved Dropout for Shallow and Deep Learning. Zhe Li, Boqing Gong, Tianbao Yang. 06 Feb 2016. [BDL, SyDa]
  • Variational Dropout and the Local Reparameterization Trick. Diederik P. Kingma, Tim Salimans, Max Welling. 08 Jun 2015. [BDL]
  • Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Y. Gal, Zoubin Ghahramani. 06 Jun 2015. [UQCV, BDL]
  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy. 11 Feb 2015. [OOD]
  • On the Inductive Bias of Dropout. D. Helmbold, Philip M. Long. 15 Dec 2014.
  • Dropout Rademacher Complexity of Deep Neural Networks. Wei Gao, Zhi Zhou. 16 Feb 2014.
  • DeepPose: Human Pose Estimation via Deep Neural Networks. Alexander Toshev, Christian Szegedy. 17 Dec 2013. [3DH]
  • Dropout Training as Adaptive Regularization. Stefan Wager, Sida I. Wang, Percy Liang. 04 Jul 2013.
  • Sharp analysis of low-rank kernel matrix approximations. Francis R. Bach. 09 Aug 2012.
  • Improving neural networks by preventing co-adaptation of feature detectors. Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. 03 Jul 2012. [VLM]