The Implicit and Explicit Regularization Effects of Dropout
Colin Wei, Sham Kakade, Tengyu Ma
arXiv:2002.12915. 28 February 2020.
Papers citing "The Implicit and Explicit Regularization Effects of Dropout" (22 of 22 papers shown):
High-order Regularization for Machine Learning and Learning-based Control. Xinghua Liu, Ming Cao. 13 May 2025.
Reasoning Bias of Next Token Prediction Training. Pengxiao Lin, Zhongwang Zhang, Zhi-Qin John Xu. 21 Feb 2025.
Agnostic Sharpness-Aware Minimization. Van-Anh Nguyen, Quyen Tran, Tuan Truong, Thanh-Toan Do, Dinh Q. Phung, Trung Le. 11 Jun 2024.
Why is SAM Robust to Label Noise? Christina Baek, Zico Kolter, Aditi Raghunathan. 06 May 2024.
A Hybrid Generative and Discriminative PointNet on Unordered Point Sets. Yang Ye, Shihao Ji. 19 Apr 2024.
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training. Hong Liu, Zhiyuan Li, David Leo Wright Hall, Percy Liang, Tengyu Ma. 23 May 2023.
Dropout Regularization in Extended Generalized Linear Models based on Double Exponential Families. Benedikt Lutke Schwienhorst, Lucas Kock, David J. Nott, Nadja Klein. 11 May 2023.
Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models. Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma. 25 Oct 2022.
Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity. Jianyi Yang, Shaolei Ren. 02 Jul 2022.
Bridging Model-based Safety and Model-free Reinforcement Learning through System Identification of Low Dimensional Linear Models. Zhongyu Li, Jun Zeng, A. Thirugnanam, K. Sreenath. 11 May 2022.
A Data-Augmentation Is Worth A Thousand Samples: Exact Quantification From Analytical Augmented Sample Moments. Randall Balestriero, Ishan Misra, Yann LeCun. 16 Feb 2022.
DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization. Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine. 09 Dec 2021.
On the Importance of Regularisation & Auxiliary Information in OOD Detection. John Mitros, Brian Mac Namee. 15 Jul 2021.
R-Drop: Regularized Dropout for Neural Networks. Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, M. Zhang, Tie-Yan Liu. 28 Jun 2021.
Regularizing Neural Networks via Adversarial Model Perturbation. Yaowei Zheng, Richong Zhang, Yongyi Mao. 10 Oct 2020.
Explicit Regularisation in Gaussian Noise Injections. A. Camuto, M. Willetts, Umut Simsekli, Stephen J. Roberts, Chris Holmes. 14 Jul 2020.
Shape Matters: Understanding the Implicit Bias of the Noise Covariance. Jeff Z. HaoChen, Colin Wei, J. Lee, Tengyu Ma. 15 Jun 2020.
Implicit Regularization in Deep Learning May Not Be Explainable by Norms. Noam Razin, Nadav Cohen. 13 May 2020.
Dropout: Explicit Forms and Capacity Control. R. Arora, Peter L. Bartlett, Poorya Mianjy, Nathan Srebro. 06 Mar 2020.
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Y. Gal, Zoubin Ghahramani. 06 Jun 2015.
Improving neural networks by preventing co-adaptation of feature detectors. Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. 03 Jul 2012.