A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich
arXiv:2003.10422, 23 March 2020. [FedML]
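
The cited work studies decentralized SGD in which each worker interleaves local stochastic gradient steps with gossip averaging over a communication graph that may change between rounds. As a rough illustration only, and not the paper's algorithm, notation, or analysis, the Python sketch below runs such a scheme on a toy least-squares problem; the ring topology, the way neighbors rotate over time, the step size, and the number of local steps are arbitrary choices made for the example.

# Minimal sketch of decentralized SGD with local updates and a changing topology.
# Illustrative only: the problem data, step size, local-step count, and mixing
# matrices below are assumptions for this example, not the cited paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, tau, rounds, lr = 8, 5, 4, 200, 0.05

# Each node i holds its own least-squares objective f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.normal(size=(n_nodes, 20, dim))
b = rng.normal(size=(n_nodes, 20))

def ring_mixing(n, shift):
    """Doubly stochastic mixing matrix for a ring whose neighbors rotate over time."""
    W = np.eye(n) / 2.0
    for i in range(n):
        j = (i + 1 + shift) % n  # neighbor depends on `shift`, giving a changing topology
        W[i, j] += 0.25
        W[j, i] += 0.25
    return W

x = np.zeros((n_nodes, dim))  # one parameter vector per node
for t in range(rounds):
    # tau local stochastic gradient steps on each node
    for _ in range(tau):
        idx = rng.integers(0, 20, size=n_nodes)
        for i in range(n_nodes):
            a_i = A[i, idx[i]]
            grad = (a_i @ x[i] - b[i, idx[i]]) * a_i
            x[i] -= lr * grad
    # gossip averaging with a time-varying mixing matrix W^(t)
    x = ring_mixing(n_nodes, t % n_nodes) @ x

print("consensus distance:", np.linalg.norm(x - x.mean(axis=0)))

In this sketch, a larger tau means less frequent communication at the cost of larger disagreement between the local models before each averaging step.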

Papers citing "A Unified Theory of Decentralized SGD with Changing Topology and Local Updates"

50 of 71 citing papers shown:
• Distribution-Aware Mobility-Assisted Decentralized Federated Learning. Md. Farhamdur Reza, Reza Jahani, Richeng Jin, H. Dai. 24 May 2025.
• Pseudo-Asynchronous Local SGD: Robust and Efficient Data-Parallel Training. Hiroki Naganuma, Xinzhi Zhang, Man-Chung Yue, Ioannis Mitliagkas, Philipp A. Witte, Russell J. Hewett, Yin Tat Lee. 25 Apr 2025.
• Decentralized Federated Domain Generalization with Style Sharing: A Formal Modeling and Convergence Analysis. Shahryar Zehtabi, Dong-Jun Han, Seyyedali Hosseinalipour, Christopher G. Brinton. 08 Apr 2025. [FedML, AI4CE]
• Scalable Decentralized Algorithms for Online Personalized Mean Estimation. Franco Galante, Giovanni Neglia, Emilio Leonardi. 20 Feb 2025. [FedML]
• A Bias-Correction Decentralized Stochastic Gradient Algorithm with Momentum Acceleration. Yuchen Hu, Xi Chen, Weidong Liu, Xiaojun Mao. 31 Jan 2025.
• Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis. Ruichen Luo, Sebastian U. Stich, Samuel Horváth, Martin Takáč. 08 Jan 2025.
• Understanding Generalization of Federated Learning: the Trade-off between Model Stability and Optimization. Dun Zeng, Zheshun Wu, Shiyu Liu, Yu Pan, Xiaoying Tang, Zenglin Xu. 25 Nov 2024. [MLT, FedML]
• Peer-to-Peer Learning Dynamics of Wide Neural Networks. Shreyas Chaudhari, Srinivasa Pranav, Emile Anand, José M. F. Moura. 23 Sep 2024.
• Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance. Dimitris Oikonomou, Nicolas Loizou. 06 Jun 2024.
• Robust Decentralized Learning with Local Updates and Gradient Tracking. Sajjad Ghiasvand, Amirhossein Reisizadeh, Mahnoosh Alizadeh, Ramtin Pedarsani. 02 May 2024.
• Faster Convergence with Less Communication: Broadcast-Based Subgraph Sampling for Decentralized Learning over Wireless Networks. Daniel Pérez Herrera, Zheng Chen, Erik G. Larsson. 24 Jan 2024.
• Communication-Efficient Federated Optimization over Semi-Decentralized Networks. He Wang, Yuejie Chi. 30 Nov 2023. [FedML]
• Distributed Random Reshuffling Methods with Improved Convergence. Kun-Yen Huang, Linli Zhou, Shi Pu. 21 Jun 2023.
• SLowcal-SGD: Slow Query Points Improve Local-SGD for Stochastic Convex Optimization. Kfir Y. Levy. 09 Apr 2023. [FedML]
• Federated Minimax Optimization: Improved Convergence Analyses and Algorithms. Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney. 09 Mar 2022. [FedML]
• Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning. Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, M. Mohri, Sashank J. Reddi, Sebastian U. Stich, A. Suresh. 08 Aug 2020. [FedML]
• SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation. Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou. 18 Jun 2020.
• Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence. Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien. 24 Feb 2020.
• Overlap Local-SGD: An Algorithmic Approach to Hide Communication Delays in Distributed SGD. Jianyu Wang, Hao Liang, Gauri Joshi. 21 Feb 2020.
• Is Local SGD Better than Minibatch SGD? Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. B. McMahan, Ohad Shamir, Nathan Srebro. 18 Feb 2020. [FedML]
• FedDANE: A Federated Newton-Type Method. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith. 07 Jan 2020. [FedML]
• Advances and Open Problems in Federated Learning. Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao. 10 Dec 2019. [FedML, AI4CE]
• Communication-Efficient Local Decentralized SGD Methods. Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang. 21 Oct 2019.
• Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks. Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu. 19 Oct 2019.
• SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. Sai Praneeth Karimireddy, Satyen Kale, M. Mohri, Sashank J. Reddi, Sebastian U. Stich, A. Suresh. 14 Oct 2019. [FedML]
• Tighter Theory for Local SGD on Identical and Heterogeneous Data. Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik. 10 Sep 2019.
• Decentralized Deep Learning with Arbitrary Communication Compression. Anastasia Koloskova, Tao R. Lin, Sebastian U. Stich, Martin Jaggi. 22 Jul 2019. [FedML]
• DeepSqueeze: Decentralization Meets Error-Compensated Compression. Hanlin Tang, Xiangru Lian, Shuang Qiu, Lei Yuan, Ce Zhang, Tong Zhang, Liu. 17 Jul 2019.
• Unified Optimal Analysis of the (Stochastic) Gradient Method. Sebastian U. Stich. 09 Jul 2019.
• On the Convergence of FedAvg on Non-IID Data. Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang. 04 Jul 2019. [FedML]
• A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent. Shi Pu, Alexander Olshevsky, I. Paschalidis. 06 Jun 2019.
• Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations. Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi. 06 Jun 2019. [MQ]
• MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling. Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar. 23 May 2019.
• Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols. Nicolas Loizou, Peter Richtárik. 20 May 2019.
• Communication trade-offs for synchronized distributed SGD with large step size. Kumar Kshitij Patel, Aymeric Dieuleveut. 25 Apr 2019. [FedML]
• Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication. Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi. 01 Feb 2019. [FedML]
• SGD: General Analysis and Improved Rates. Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik. 27 Jan 2019.
• Federated Optimization in Heterogeneous Networks. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith. 14 Dec 2018. [FedML]
• Stochastic Gradient Push for Distributed Deep Learning. Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael G. Rabbat. 27 Nov 2018.
• New Convergence Aspects of Stochastic Gradient Algorithms. Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk. 10 Nov 2018.
• Sparsified SGD with Memory. Sebastian U. Stich, Jean-Baptiste Cordonnier, Martin Jaggi. 20 Sep 2018.
• Distributed Nonconvex Constrained Optimization over Time-Varying Digraphs. G. Scutari, Ying Sun. 04 Sep 2018.
• A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks. César A. Uribe, Soomin Lee, Alexander Gasnikov, A. Nedić. 03 Sep 2018.
• Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms. Jianyu Wang, Gauri Joshi. 22 Aug 2018.
• Don't Use Large Mini-Batches, Use Local SGD. Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi. 22 Aug 2018.
• COLA: Decentralized Linear Learning. Lie He, An Bian, Martin Jaggi. 13 Aug 2018.
• Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning. Hao Yu, Sen Yang, Shenghuo Zhu. 17 Jul 2018. [MoMe, FedML]
• An Exact Quantized Decentralized Gradient Descent Algorithm. Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani. 29 Jun 2018.
• Federated Learning with Non-IID Data. Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, Vikas Chandra. 02 Jun 2018. [FedML]
• Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization. Blake E. Woodworth, Jialei Wang, Adam D. Smith, H. B. McMahan, Nathan Srebro. 25 May 2018.