MG-WFBP: Efficient Data Communication for Distributed Synchronous SGD Algorithms
arXiv: 1811.11141
27 November 2018
S. Shi, X. Chu, Bo Li
FedML
Papers citing "MG-WFBP: Efficient Data Communication for Distributed Synchronous SGD Algorithms" (12 of 12 papers shown)
FedImpro: Measuring and Improving Client Update in Federated Learning
Zhenheng Tang, Yonggang Zhang, S. Shi, Xinmei Tian, Tongliang Liu, Bo Han, Xiaowen Chu
FedML · 10 Feb 2024
Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training
Shengwei Li, Zhiquan Lai, Yanqi Hao, Weijie Liu, Ke-shi Ge, Xiaoge Deng, Dongsheng Li, KaiCheng Lu
25 May 2023
FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training
Zhenheng Tang, X. Chu, Ryan Yide Ran, Sunwoo Lee, S. Shi, Yonggang Zhang, Yuxin Wang, Alex Liang, A. Avestimehr, Chaoyang He
FedML · 03 Mar 2023
Towards Efficient Communications in Federated Learning: A Contemporary Survey
Zihao Zhao, Yuzhu Mao, Yang Liu, Linqi Song, Ouyang Ye, Xinlei Chen, Wenbo Ding
FedML · 02 Aug 2022
Accelerating Distributed K-FAC with Smart Parallelism of Computing and Communication Tasks
S. Shi, Lin Zhang, Bo-wen Li
14 Jul 2021
Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA
M. Wahib, Haoyu Zhang, Truong Thao Nguyen, Aleksandr Drozd, Jens Domke, Lingqi Zhang, Ryousei Takano, Satoshi Matsuoka
OODD · 26 Aug 2020
Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
06 Mar 2020
Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
Pengchao Han, Shiqiang Wang, K. Leung
FedML · 14 Jan 2020
MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
S. Shi, X. Chu, Bo Li
FedML · 18 Dec 2019
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, X. Chu
20 Nov 2019
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta, El Houcine Bergou, A. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis
19 Nov 2019
A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
S. Shi, Qiang-qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang, Xiaowen Chu
14 Jan 2019