arXiv:2203.09962
Randomized Sharpness-Aware Training for Boosting Computational Efficiency in Deep Learning
18 March 2022
Yang Zhao, Hao Zhang, Xiuyuan Hu
Papers citing "Randomized Sharpness-Aware Training for Boosting Computational Efficiency in Deep Learning" (6 papers):
Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach (11 Oct 2022)
Peng Mi, Li Shen, Tianhe Ren, Yiyi Zhou, Xiaoshuai Sun, Rongrong Ji, Dacheng Tao
Tags: AAML
Sharpness-Aware Minimization Improves Language Model Generalization (16 Oct 2021)
Dara Bahri, H. Mobahi, Yi Tay
ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks (23 Feb 2021)
Jungmin Kwon, Jeongseop Kim, Hyunseong Park, I. Choi
Sharpness-Aware Minimization for Efficiently Improving Generalization (03 Oct 2020)
Pierre Foret, Ariel Kleiner, H. Mobahi, Behnam Neyshabur
Tags: AAML
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima (15 Sep 2016)
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Tags: ODL
Wide Residual Networks (23 May 2016)
Sergey Zagoruyko, N. Komodakis