Communication-Efficient Local Decentralized SGD Methods

21 October 2019
Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang
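For readers arriving from the citation graph, a brief note on the method family the title names: in local decentralized SGD, each worker runs several local gradient steps on its own data and only periodically averages parameters with its graph neighbors through a doubly stochastic mixing matrix, which is what makes the scheme communication-efficient. The sketch below illustrates that generic pattern, not this paper's exact algorithm; the ring topology, least-squares objective, and all constants are illustrative assumptions.

# Minimal sketch of local decentralized SGD (illustrative, not the paper's
# exact method): each worker takes `local_steps` local SGD steps, then
# gossip-averages parameters with its ring neighbors via mixing matrix W.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, local_steps, rounds, lr = 4, 10, 5, 50, 0.1

# Ring topology: symmetric, doubly stochastic mixing matrix W.
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = 0.5
    W[i, (i - 1) % n_workers] = 0.25
    W[i, (i + 1) % n_workers] = 0.25

# Each worker holds private data (here: its own least-squares target).
targets = rng.normal(size=(n_workers, dim))
x = np.zeros((n_workers, dim))  # one parameter vector per worker

for _ in range(rounds):
    for _ in range(local_steps):  # local phase: no communication
        grads = x - targets + 0.01 * rng.normal(size=x.shape)  # noisy gradient
        x -= lr * grads
    x = W @ x  # communication phase: one gossip-averaging step

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))

Communication happens once per round instead of once per gradient step, so the number of messages drops by a factor of `local_steps` relative to fully synchronous decentralized SGD.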

Papers citing "Communication-Efficient Local Decentralized SGD Methods"

13 papers
Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems
  Hongchang Gao · 06 Dec 2022

FedCut: A Spectral Analysis Framework for Reliable Detection of Byzantine Colluders
  Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang · AAML, FedML · 24 Nov 2022

NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data
  Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia-Wei Liu, Zhengyuan Zhu · FedML · 17 Aug 2022

FedSSO: A Federated Server-Side Second-Order Optimization Algorithm
  Xinteng Ma, Renyi Bao, Jinpeng Jiang, Yang Liu, Arthur Jiang, Junhua Yan, Xin Liu, Zhisong Pan · FedML · 20 Jun 2022

Data-heterogeneity-aware Mixing for Decentralized Learning
  Yatin Dandi, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich · 13 Apr 2022

Federated Learning with Buffered Asynchronous Aggregation
  John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba · FedML · 11 Jun 2021

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
  Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko · 04 Mar 2021

Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency
  Yuyang Deng, M. Mahdavi · 25 Feb 2021

MARINA: Faster Non-Convex Distributed Learning with Compression
  Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik · 15 Feb 2021

Periodic Stochastic Gradient Descent with Momentum for Decentralized Training
  Hongchang Gao, Heng-Chiao Huang · 24 Aug 2020

Federated Mutual Learning
  T. Shen, Jie Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou, Kun Kuang, Fei Wu, Chao-Xiang Wu · FedML · 27 Jun 2020

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
  Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich · FedML · 23 Mar 2020

On the Convergence of Local Descent Methods in Federated Learning
  Farzin Haddadpour, M. Mahdavi · FedML · 31 Oct 2019