ResearchTrend.AI
Achieving Linear Convergence in Distributed Asynchronous Multi-agent Optimization

28 March 2018
Ye Tian, Ying Sun, G. Scutari

Papers citing "Achieving Linear Convergence in Distributed Asynchronous Multi-agent Optimization"

6 papers shown:

  1. Asynchronous Decentralized SGD under Non-Convexity: A Block-Coordinate Descent Framework
     Yijie Zhou, Shi Pu (15 May 2025)
  2. A Tutorial on Distributed Optimization for Cooperative Robotics: from Setups and Algorithms to Toolboxes and Research Directions
     Andrea Testa, Guido Carnevale, G. Notarstefano (08 Sep 2023)
  3. Robust Fully-Asynchronous Methods for Distributed Training over General Architecture
     Zehan Zhu, Ye Tian, Yan Huang, Jinming Xu, Shibo He (21 Jul 2023)
  4. Optimal Complexity in Decentralized Training
     Yucheng Lu, Christopher De Sa (15 Jun 2020)
  5. A Robust Gradient Tracking Method for Distributed Optimization over Directed Networks
     Shi Pu (31 Mar 2020)
  6. Asynchronous Gradient-Push
     Mahmoud Assran, Michael G. Rabbat (23 Mar 2018)