Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms

22 August 2018
Jianyu Wang, Gauri Joshi

Papers citing "Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms"

50 / 209 papers shown
Title
Differentially Private Federated Learning for Resource-Constrained Internet of Things
Rui Hu, Yuanxiong Guo, E. Ratazzi, Yanmin Gong
FedML · 17 citations · 28 Mar 2020

A Hybrid-Order Distributed SGD Method for Non-Convex Optimization to Balance Communication Overhead, Computational Complexity, and Convergence Rate
Naeimeh Omidvar, M. Maddah-Ali, Hamed Mahdavi
ODL · 3 citations · 27 Mar 2020

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich
FedML · 493 citations · 23 Mar 2020

Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang, S. Shi, Wei Wang, Bo-wen Li, Xiaowen Chu
48 citations · 10 Mar 2020

Trends and Advancements in Deep Neural Network Communication
Felix Sattler, Thomas Wiegand, Wojciech Samek
GNN · 9 citations · 06 Mar 2020

Adaptive Federated Optimization
Sashank J. Reddi, Zachary B. Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. B. McMahan
FedML · 1,391 citations · 29 Feb 2020

LASG: Lazily Aggregated Stochastic Gradients for Communication-Efficient Distributed Learning
Tianyi Chen, Yuejiao Sun, W. Yin
FedML · 14 citations · 26 Feb 2020

Network-Density-Controlled Decentralized Parallel Stochastic Gradient Descent in Wireless Systems
Koya Sato, Yasuyuki Satoh, D. Sugimura
1 citation · 25 Feb 2020

FMore: An Incentive Scheme of Multi-dimensional Auction for Federated Learning in MEC
Rongfei Zeng, Shixun Zhang, Jiaqi Wang, X. Chu
FedML · 180 citations · 22 Feb 2020

Communication-Efficient Edge AI: Algorithms and Systems
Yuanming Shi, Kai Yang, Tao Jiang, Jun Zhang, Khaled B. Letaief
GNN · 326 citations · 22 Feb 2020

Overlap Local-SGD: An Algorithmic Approach to Hide Communication Delays in Distributed SGD
Jianyu Wang, Hao Liang, Gauri Joshi
33 citations · 21 Feb 2020

Dynamic Federated Learning
Elsa Rizk, Stefan Vlaski, Ali H. Sayed
FedML · 25 citations · 20 Feb 2020

Communication-Efficient Distributed SVD via Local Power Iterations
Xiang Li, Shusen Wang, Kun Chen, Zhihua Zhang
21 citations · 19 Feb 2020

Personalized Federated Learning: A Meta-Learning Approach
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
FedML · 561 citations · 19 Feb 2020

Is Local SGD Better than Minibatch SGD?
Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. B. McMahan, Ohad Shamir, Nathan Srebro
FedML · 253 citations · 18 Feb 2020

Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability
Yikai Yan, Chaoyue Niu, Yucheng Ding, Zhenzhe Zheng, Fan Wu, Guihai Chen, Shaojie Tang, Zhihua Wu
FedML · 37 citations · 18 Feb 2020

Faster On-Device Training Using New Federated Momentum Algorithm
Zhouyuan Huo, Qian Yang, Bin Gu, Heng-Chiao Huang
FedML · 47 citations · 06 Feb 2020

Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent
Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh
FedML · 14 citations · 16 Jan 2020

Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
Pengchao Han, Shiqiang Wang, K. Leung
FedML · 175 citations · 14 Jan 2020

FedDANE: A Federated Newton-Type Method
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
FedML · 155 citations · 07 Jan 2020

Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang, Terrance Liu, Liu Ziyin, Nicholas B. Allen, Randy P. Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
FedML · 548 citations · 06 Jan 2020

Variance Reduced Local SGD with Lower Communication Complexity
Xian-Feng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng
FedML · 152 citations · 30 Dec 2019

Distributed Fixed Point Methods with Compressed Iterates
Sélim Chraibi, Ahmed Khaled, D. Kovalev, Peter Richtárik, Adil Salim, Martin Takáč
FedML · 16 citations · 20 Dec 2019

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FedML, AI4CE · 6,079 citations · 10 Dec 2019

Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates
Cong Xie, Oluwasanmi Koyejo, Indranil Gupta, Yanghua Peng
41 citations · 20 Nov 2019

On the Convergence of Local Descent Methods in Federated Learning
Farzin Haddadpour, M. Mahdavi
FedML · 266 citations · 31 Oct 2019

Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization
Farzin Haddadpour, Mohammad Mahdi Kamani, M. Mahdavi, V. Cadambe
FedML · 199 citations · 30 Oct 2019

Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation
Canh T. Dinh, N. H. Tran, Minh N. H. Nguyen, Choong Seon Hong, Wei Bao, Albert Y. Zomaya, Vincent Gramoli
FedML · 329 citations · 29 Oct 2019

Asynchronous Decentralized SGD with Quantized and Local Updates
Giorgi Nadiradze, Amirmojtaba Sabour, Peter Davies, Shigang Li, Dan Alistarh
49 citations · 27 Oct 2019

Communication-Efficient Local Decentralized SGD Methods
Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang
53 citations · 21 Oct 2019

Central Server Free Federated Learning over Single-sided Trust Social Networks
Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu, Ji Liu
FedML · 73 citations · 11 Oct 2019

SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum
Jianyu Wang, Vinayak Tantia, Nicolas Ballas, Michael G. Rabbat
200 citations · 01 Oct 2019

FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
FedML · 760 citations · 28 Sep 2019

Matrix Sketching for Secure Collaborative Machine Learning
Mengjiao Zhang, Shusen Wang
FedML · 14 citations · 24 Sep 2019

Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
Jun Sun, Tianyi Chen, G. Giannakis, Zaiyue Yang
93 citations · 17 Sep 2019

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Sebastian U. Stich, Sai Praneeth Karimireddy
FedML · 20 citations · 11 Sep 2019

Tighter Theory for Local SGD on Identical and Heterogeneous Data
Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik
426 citations · 10 Sep 2019

Gradient Descent with Compressed Iterates
Ahmed Khaled, Peter Richtárik
22 citations · 10 Sep 2019

First Analysis of Local GD on Heterogeneous Data
Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik
FedML · 172 citations · 10 Sep 2019

Distributed Deep Learning with Event-Triggered Communication
Jemin George, Prudhvi K. Gurram
16 citations · 08 Sep 2019

Federated Learning: Challenges, Methods, and Future Directions
Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith
FedML · 4,417 citations · 21 Aug 2019

Decentralized Deep Learning with Arbitrary Communication Compression
Anastasia Koloskova, Tao R. Lin, Sebastian U. Stich, Martin Jaggi
FedML · 233 citations · 22 Jul 2019

On the Convergence of FedAvg on Non-IID Data
Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang
FedML · 2,283 citations · 04 Jul 2019

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations
Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi
MQ · 400 citations · 06 Jun 2019

Fair Resource Allocation in Federated Learning
Tian Li, Maziar Sanjabi, Ahmad Beirami, Virginia Smith
FedML · 781 citations · 25 May 2019

Decentralized Bayesian Learning over Graphs
Anusha Lalitha, Xinghan Wang, O. Kilinc, Y. Lu, T. Javidi, F. Koushanfar
FedML · 25 citations · 24 May 2019

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar
159 citations · 23 May 2019

On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization
Hao Yu, R. L. Jin
50 citations · 10 May 2019

On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization
Hao Yu, R. L. Jin, Sen Yang
FedML · 379 citations · 09 May 2019

Trajectory Normalized Gradients for Distributed Optimization
Jianqiao Wangni, Ke Li, Jianbo Shi, Jitendra Malik
2 citations · 24 Jan 2019