
Second-Order Guarantees of Stochastic Gradient Descent in Non-Convex Optimization (arXiv:1908.07023)

19 August 2019
Stefan Vlaski, Ali H. Sayed
ODL

Papers citing "Second-Order Guarantees of Stochastic Gradient Descent in Non-Convex Optimization"

13 papers shown
On the Second-Order Convergence of Biased Policy Gradient Algorithms
Siqiao Mu, Diego Klabjan
05 Nov 2023
Exact Subspace Diffusion for Decentralized Multitask Learning
Shreya Wadehra, Roula Nassif, Stefan Vlaski
14 Apr 2023
Type-II Saddles and Probabilistic Stability of Stochastic Gradient Descent
Liu Ziyin, Botao Li, Tomer Galanti, Masakuni Ueda
23 Mar 2023
Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption
Jun Liu, Ye Yuan
ODL
15 Feb 2023
3DPG: Distributed Deep Deterministic Policy Gradient Algorithms for Networked Multi-Agent Systems
Adrian Redder, Arunselvan Ramaswamy, Holger Karl
OffRL
03 Jan 2022
Distributed Adaptive Learning Under Communication Constraints
Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
03 Dec 2021
SGD with a Constant Large Learning Rate Can Converge to Local Maxima
Liu Ziyin, Botao Li, James B. Simon, Masakuni Ueda
25 Jul 2021
Second-Order Guarantees in Federated Learning
Stefan Vlaski, Elsa Rizk, Ali H. Sayed
FedML
02 Dec 2020
On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems
P. Mertikopoulos, Nadav Hallak, Ali Kavis, V. Cevher
19 Jun 2020
Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization
Stefan Vlaski, Ali H. Sayed
31 Mar 2020
Linear Speedup in Saddle-Point Escape for Decentralized Non-Convex Optimization
Stefan Vlaski, Ali H. Sayed
30 Oct 2019
The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
ODL
30 Nov 2014
Distributed Pareto Optimization via Diffusion Strategies
Jianshu Chen, Ali H. Sayed
13 Aug 2012