The Convergence of Sparsified Gradient Methods
arXiv:1809.10505 · 27 September 2018
Dan Alistarh, Torsten Hoefler, M. Johansson, Sarit Khirirat, Nikola Konstantinov, Cédric Renggli

Papers citing "The Convergence of Sparsified Gradient Methods" (25 of 125 shown)
rTop-k: A Statistical Estimation Approach to Distributed SGD
L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür · 21 May 2020

Communication-Efficient Gradient Coding for Straggler Mitigation in Distributed Learning
S. Kadhe, O. O. Koyluoglu, Kannan Ramchandran · 14 May 2020

A Federated Learning Framework for Healthcare IoT Devices
Binhang Yuan, Song Ge, Wenhui Xing · 07 May 2020 · FedML, OOD

Detached Error Feedback for Distributed SGD with Random Sparsification
An Xu, Heng-Chiao Huang · 11 Apr 2020

Communication Optimization Strategies for Distributed Deep Neural Network Training: A Survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao · 06 Mar 2020

Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Richeng Jin, Yufan Huang, Xiaofan He, H. Dai, Tianfu Wu · 25 Feb 2020 · FedML
Communication-Efficient Edge AI: Algorithms and Systems
Yuanming Shi, Kai Yang, Tao Jiang, Jun Zhang, Khaled B. Letaief · 22 Feb 2020 · GNN

Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan, Egor Shulgin, Peter Richtárik · 20 Feb 2020

COKE: Communication-Censored Decentralized Kernel Learning
Ping Xu, Yue Wang, Xiang Chen, Z. Tian · 28 Jan 2020

Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
Pengchao Han, Shiqiang Wang, K. Leung · 14 Jan 2020 · FedML

Understanding Top-k Sparsification in Distributed Deep Learning
S. Shi, Xiaowen Chu, Ka Chun Cheung, Simon See · 20 Nov 2019

Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, Xiaowen Chu · 20 Nov 2019
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta, El Houcine Bergou, A. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis · 19 Nov 2019

Model Pruning Enables Efficient Federated Learning on Edge Devices
Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas · 26 Sep 2019

Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
Jun Sun, Tianyi Chen, G. Giannakis, Zaiyue Yang · 17 Sep 2019

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Sebastian U. Stich, Sai Praneeth Karimireddy · 11 Sep 2019 · FedML

Federated Learning over Wireless Fading Channels
M. Amiri, Deniz Gunduz · 23 Jul 2019

Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent
Shuheng Shen, Linli Xu, Jingchang Liu, Xianfeng Liang, Yifei Cheng · 28 Jun 2019 · ODL, FedML
Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik · 27 May 2019

Distributed Learning with Sublinear Communication
Jayadev Acharya, Christopher De Sa, Dylan J. Foster, Karthik Sridharan · 28 Feb 2019 · FedML

On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication
Sindri Magnússon, H. S. Ghadikolaei, Na Li · 26 Feb 2019

A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
S. Shi, Qiang-qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang, Xiaowen Chu · 14 Jan 2019

Double Quantization for Communication-Efficient Distributed Optimization
Yue Yu, Jiaxiang Wu, Longbo Huang · 25 May 2018 · MQ
Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek · 22 May 2018 · MQ

SparCML: High-Performance Sparse Communication for Machine Learning
Cédric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, Torsten Hoefler · 22 Feb 2018