arXiv: 2304.09221
Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks
18 April 2023
Jing An
Jianfeng Lu
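For context on the headline paper: the "local Lojasiewicz condition" in the title refers to a gradient-domination inequality on the loss. Below is a minimal sketch of the standard (global) Lojasiewicz gradient inequality and its Polyak-Lojasiewicz special case; the symbols f, f^*, c, theta, and mu are generic placeholders, and the exact local variant, exponent, and constants used by An and Lu are not stated on this page and may differ.

% Sketch, not the paper's exact assumption: the classical Lojasiewicz gradient
% inequality for a smooth loss f with infimum f^*; constants c, theta are generic.
\[
  \|\nabla f(x)\| \ \ge\ c\,\bigl(f(x) - f^{*}\bigr)^{\theta},
  \qquad c > 0,\ \theta \in (0,1).
\]
% The Polyak-Lojasiewicz (PL) case theta = 1/2 is the form most convergence
% analyses of (stochastic) gradient descent use:
\[
  \|\nabla f(x)\|^{2} \ \ge\ 2\mu\,\bigl(f(x) - f^{*}\bigr), \qquad \mu > 0.
\]
% A "local" condition is understood here (as an assumption of this note) to mean
% the inequality is required only on a neighborhood of the minimizers rather
% than on all of R^d.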
Papers citing "Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks" (8 of 8 papers shown):
1. Convergence of gradient descent for deep neural networks. S. Chatterjee. 30 Mar 2022.
2. Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. Chaoyue Liu, Libin Zhu, M. Belkin. 29 Feb 2020.
3. Unified Optimal Analysis of the (Stochastic) Gradient Method. Sebastian U. Stich. 09 Jul 2019.
4. Momentum-Based Variance Reduction in Non-Convex SGD. Ashok Cutkosky, Francesco Orabona. 24 May 2019.
5. Convergence rates for the stochastic gradient descent method for non-convex objective functions. Benjamin J. Fehrman, Benjamin Gess, Arnulf Jentzen. 02 Apr 2019.
6. How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD. Zeyuan Allen-Zhu. 08 Jan 2018.
7. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee. 16 Jul 2017.
8. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. Priya Goyal, Piotr Dollár, Ross B. Girshick, P. Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He. 08 Jun 2017.