Block-Normalized Gradient Method: An Empirical Study for Training Deep Neural Network
arXiv: 1707.04822 (16 July 2017)
Authors: Adams Wei Yu, Lei Huang, Qihang Lin, Ruslan Salakhutdinov, J. Carbonell
Tags: ODL
Papers citing "Block-Normalized Gradient Method: An Empirical Study for Training Deep Neural Network" (9 of 9 papers shown)

Title | Authors | Tags | Metrics | Date
The Power of Normalization: Faster Evasion of Saddle Points | Kfir Y. Levy | - | 65 / 108 / 0 | 15 Nov 2016
Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | Tim Salimans, Diederik P. Kingma | ODL | 167 / 1,940 / 0 | 25 Feb 2016
Beyond Convexity: Stochastic Quasi-Convex Optimization | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | - | 60 / 176 / 0 | 08 Jul 2015
Path-SGD: Path-Normalized Optimization in Deep Neural Networks | Behnam Neyshabur, Ruslan Salakhutdinov, Nathan Srebro | ODL | 75 / 307 / 0 | 08 Jun 2015
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | Sergey Ioffe, Christian Szegedy | OOD | 415 / 43,234 / 0 | 11 Feb 2015
Adam: A Method for Stochastic Optimization | Diederik P. Kingma, Jimmy Ba | ODL | 1.4K / 149,842 / 0 | 22 Dec 2014
Very Deep Convolutional Networks for Large-Scale Image Recognition | Karen Simonyan, Andrew Zisserman | FAtt, MDE | 1.3K / 100,213 / 0 | 04 Sep 2014
Convolutional Neural Networks for Sentence Classification | Yoon Kim | AILaw, VLM | 593 / 13,416 / 0 | 25 Aug 2014
On the difficulty of training Recurrent Neural Networks | Razvan Pascanu, Tomas Mikolov, Yoshua Bengio | ODL | 182 / 5,334 / 0 | 21 Nov 2012