arXiv:1910.13021
Efficiently avoiding saddle points with zero order methods: No gradients required
Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras
29 October 2019
Papers citing "Efficiently avoiding saddle points with zero order methods: No gradients required" (7 papers):
Comparisons Are All You Need for Optimizing Smooth Functions. Chenyi Zhang, Tongyang Li. 19 May 2024.
Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption. Jun Liu, Ye Yuan. 15 Feb 2023.
Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients. Hualin Zhang, Huan Xiong, Bin Gu. 4 Oct 2022.
Stochastic Gradient Langevin Dynamics with Variance Reduction. Zhishen Huang, Stephen Becker. 12 Feb 2021.
On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems. P. Mertikopoulos, Nadav Hallak, Ali Kavis, V. Cevher. 19 Jun 2020.
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning. Sijia Liu, Pin-Yu Chen, B. Kailkhura, Gaoyuan Zhang, A. Hero III, P. Varshney. 11 Jun 2020.
The Loss Surfaces of Multilayer Networks. A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun. 30 Nov 2014.