arXiv:1902.00247

Sharp Analysis for Nonconvex SGD Escaping from Saddle Points
Cong Fang, Zhouchen Lin, Tong Zhang (1 February 2019)

Papers citing "Sharp Analysis for Nonconvex SGD Escaping from Saddle Points" (24 of 24 papers shown):
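The cited paper analyzes how the inherent noise in stochastic gradients lets SGD escape strict saddle points. As a minimal illustrative sketch (not the paper's algorithm or rates), the toy function f(x, y) = x² − y² has a strict saddle at the origin: exact gradient descent started on the x-axis stalls there, while a small isotropic perturbation pushes the iterate onto the escape direction. The function `descend` and its parameters below are hypothetical names chosen for this demo:

```python
import numpy as np

# f(x, y) = x^2 - y^2: gradient is (2x, -2y); the origin is a strict saddle.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def descend(p0, lr=0.1, steps=100, noise=0.0, seed=0):
    """Gradient descent with optional isotropic Gaussian perturbation."""
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        g = grad(p)
        if noise > 0.0:
            # crude stand-in for stochastic-gradient noise
            g = g + noise * rng.standard_normal(2)
        p = p - lr * g
    return p

# Started at (1, 0), exact GD keeps y = 0 forever and converges to the saddle;
# the perturbed run picks up a y-component that grows geometrically and escapes.
stuck = descend([1.0, 0.0])
free = descend([1.0, 0.0], noise=0.01)
print("exact GD:", stuck, " perturbed:", free)
```

Since f is unbounded below along y, "escaping" here just means |y| becomes large; the paper's analysis concerns escape to approximate second-order stationary points of bounded objectives, which this toy does not model.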
1. Dynamic Decoupling of Placid Terminal Attractor-based Gradient Descent Algorithm
   Jinwei Zhao, Marco Gori, Alessandro Betti, S. Melacci, Hongtao Zhang, Jiedong Liu, Xinhong Hei (10 Sep 2024)

2. Random Scaling and Momentum for Non-smooth Non-convex Optimization
   Qinzi Zhang, Ashok Cutkosky (16 May 2024)

3. Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption
   Jun Liu, Ye Yuan (15 Feb 2023)

4. The Dynamics of Sharpness-Aware Minimization: Bouncing Across Ravines and Drifting Towards Wide Minima
   Peter L. Bartlett, Philip M. Long, Olivier Bousquet (04 Oct 2022)

5. Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition
   Jianhao Ma, Li-Zhen Guo, S. Fattahi (01 Oct 2022)

6. On Quantum Speedups for Nonconvex Optimization via Quantum Tunneling Walks
   Yizhou Liu, Weijie J. Su, Tongyang Li (29 Sep 2022)

7. Tackling benign nonconvexity with smoothing and stochastic gradients
   Harsh Vardhan, Sebastian U. Stich (18 Feb 2022)

8. Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the O(ε^{-7/4}) Complexity
   Huan Li, Zhouchen Lin (27 Jan 2022)

9. Escape saddle points by a simple gradient-descent based algorithm
   Chenyi Zhang, Tongyang Li (28 Nov 2021)

10. Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
    Spencer Frei, Quanquan Gu (25 Jun 2021)

11. Escaping Saddle Points with Compressed SGD
    Dmitrii Avdiukhin, G. Yaroslavtsev (21 May 2021)

12. On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs)
    Zhiyuan Li, Sadhika Malladi, Sanjeev Arora (24 Feb 2021)

13. Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
    Cong Fang, Hangfeng He, Qi Long, Weijie J. Su (29 Jan 2021)

14. Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization
    Jun-Kun Wang, Jacob D. Abernethy (04 Oct 2020)

15. Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
    Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan (24 Jun 2020)

16. Stopping Criteria for, and Strong Convergence of, Stochastic Gradient Descent on Bottou-Curtis-Nocedal Functions
    V. Patel (01 Apr 2020)

17. Better Theory for SGD in the Nonconvex World
    Ahmed Khaled, Peter Richtárik (09 Feb 2020)

18. Momentum Improves Normalized SGD
    Ashok Cutkosky, Harsh Mehta (09 Feb 2020)

19. Second-Order Guarantees of Stochastic Gradient Descent in Non-Convex Optimization
    Stefan Vlaski, Ali H. Sayed (19 Aug 2019)

20. A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
    Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen (08 Jul 2019)

21. Distributed Learning in Non-Convex Environments -- Part II: Polynomial Escape from Saddle-Points
    Stefan Vlaski, Ali H. Sayed (03 Jul 2019)

22. Stochastic Nested Variance Reduction for Nonconvex Optimization
    Dongruo Zhou, Pan Xu, Quanquan Gu (20 Jun 2018)

23. First-order Methods Almost Always Avoid Saddle Points
    J. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht (20 Oct 2017)

24. A Proximal Stochastic Gradient Method with Progressive Variance Reduction
    Lin Xiao, Tong Zhang (19 Mar 2014)