Cited By
Inertial Newton Algorithms Avoiding Strict Saddle Points (arXiv:2111.04596)
8 November 2021
Camille Castera
Papers citing "Inertial Newton Algorithms Avoiding Strict Saddle Points" (8 / 8 papers shown)
1. On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems
   P. Mertikopoulos, Nadav Hallak, Ali Kavis, Volkan Cevher (19 Jun 2020)

2. Convergence to minima for the continuous version of Backtracking Gradient Descent
   T. Truong (11 Nov 2019)

3. An Inertial Newton Algorithm for Deep Learning
   Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels (29 May 2019)

4. Understanding the Acceleration Phenomenon via High-Resolution Differential Equations
   Bin Shi, S. Du, Michael I. Jordan, Weijie J. Su (21 Oct 2018)

5. Backtracking gradient descent method for general C^1 functions, with applications to Deep Learning
   T. Truong, T. H. Nguyen (15 Aug 2018)

6. Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
   Ioannis Panageas, Georgios Piliouras (02 May 2016)

7. A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
   Weijie Su, Stephen P. Boyd, Emmanuel J. Candes (04 Mar 2015)

8. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
   Yann N. Dauphin, Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio (10 Jun 2014)