ResearchTrend.AI
Second-order Guarantees of Distributed Gradient Algorithms

23 September 2018
Amir Daneshmand, G. Scutari, Vyacheslav Kungurtsev

Papers citing "Second-order Guarantees of Distributed Gradient Algorithms"

10 papers shown

  1. A Tutorial on Distributed Optimization for Cooperative Robotics: from Setups and Algorithms to Toolboxes and Research Directions
     Andrea Testa, Guido Carnevale, G. Notarstefano (08 Sep 2023)
  2. Distributed Optimization Methods for Multi-Robot Systems: Part II -- A Survey
     O. Shorinwa, Trevor Halsted, Javier Yu, Mac Schwager (26 Jan 2023)
  3. Decentralized Nonconvex Optimization with Guaranteed Privacy and Accuracy
     Yongqiang Wang, Tamer Basar (14 Dec 2022)
  4. Distributed Sparse Regression via Penalization
     Yao Ji, G. Scutari, Ying Sun, Harsha Honnappa (12 Nov 2021)
  5. A Survey of Distributed Optimization Methods for Multi-Robot Systems
     Trevor Halsted, O. Shorinwa, Javier Yu, Mac Schwager (23 Mar 2021)
  6. Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion
     Brian Swenson, Ryan W. Murray, H. Vincent Poor, S. Kar (12 Aug 2020)
  7. Second-Order Guarantees of Stochastic Gradient Descent in Non-Convex Optimization
     Stefan Vlaski, Ali H. Sayed (19 Aug 2019)
  8. Distributed Gradient Descent: Nonconvergence to Saddle Points and the Stable-Manifold Theorem
     Brian Swenson, Ryan W. Murray, H. Vincent Poor, S. Kar (07 Aug 2019)
  9. Distributed Learning in Non-Convex Environments -- Part II: Polynomial Escape from Saddle-Points
     Stefan Vlaski, Ali H. Sayed (03 Jul 2019)
  10. Distributed stochastic optimization with gradient tracking over strongly-connected networks
      Ran Xin, Anit Kumar Sahu, U. Khan, S. Kar (18 Mar 2019)