Positively Scale-Invariant Flatness of ReLU Neural Networks
Mingyang Yi, Qi Meng, Wei-neng Chen, Zhi-Ming Ma, Tie-Yan Liu
arXiv:1903.02237 · 6 March 2019
Papers citing "Positively Scale-Invariant Flatness of ReLU Neural Networks" (8 papers)
Local Identifiability of Deep ReLU Neural Networks: the Theory
Joachim Bona-Pellissier, François Malgouyres, F. Bachoc (15 Jun 2022)

Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction
Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora (14 Jun 2022)

An Embedding of ReLU Networks and an Analysis of their Identifiability
Pierre Stock, Rémi Gribonval (20 Jul 2021)

ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks
Jungmin Kwon, Jeongseop Kim, Hyunseong Park, I. Choi (23 Feb 2021)

The Representation Theory of Neural Networks
M. Armenta, Pierre-Marc Jodoin (23 Jul 2020)

Optimization for deep learning: theory and algorithms
Ruoyu Sun (19 Dec 2019)

Hessian based analysis of SGD for Deep Nets: Dynamics and Generalization
Xinyan Li, Qilong Gu, Yingxue Zhou, Tiancong Chen, A. Banerjee (24 Jul 2019)

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)