Stability of Decentralized Gradient Descent in Open Multi-Agent Systems

Julien Hendrickx, Michael G. Rabbat
11 September 2020 · arXiv: 2009.05445

Papers citing "Stability of Decentralized Gradient Descent in Open Multi-Agent Systems" (4 papers shown):
  • Secure Computation for Machine Learning With SPDZ (Valerie Chen, Valerio Pastro, Mariana Raykova; 02 Jan 2019)
  • Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization (A. Nedić, Alexander Olshevsky, Michael G. Rabbat; 26 Sep 2017)
  • Distributed Online Optimization in Dynamic Environments Using Mirror Descent (Shahin Shahrampour, Ali Jadbabaie; 09 Sep 2016)
  • Communication-Efficient Learning of Deep Networks from Decentralized Data (H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas; 17 Feb 2016) [FedML]