Sparse Communication for Distributed Gradient Descent
arXiv: 1704.05021 · 17 April 2017
Alham Fikri Aji, Kenneth Heafield
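As context for the list below, the cited paper's core technique is gradient dropping: each worker exchanges only the largest-magnitude gradient components (around 1% of them) and folds everything it dropped into a local residual, so small updates are delayed rather than discarded. The following is a minimal NumPy sketch of one step, with function and variable names of our own choosing, not the authors' implementation; the paper also speeds up threshold selection by estimating it from a small sample of the gradient rather than computing it exactly.

```python
import numpy as np

def sparsify_gradient(grad, residual, drop_ratio=0.99):
    """One step of gradient dropping with local error accumulation.

    A sketch of the paper's idea, not the authors' code.
    grad:     dense gradient from the current mini-batch
    residual: locally accumulated mass of previously dropped entries
    """
    acc = grad + residual                      # re-add previously dropped gradient mass
    threshold = np.quantile(np.abs(acc), drop_ratio)
    mask = np.abs(acc) >= threshold            # keep the top (1 - drop_ratio) by magnitude
    sparse_update = np.where(mask, acc, 0.0)   # the only tensor that is communicated
    new_residual = np.where(mask, 0.0, acc)    # dropped entries stay local for later steps
    return sparse_update, new_residual
```

Because `sparse_update` is mostly zeros, it can be exchanged compactly as (index, value) pairs, which is where the communication savings come from.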
Papers citing "Sparse Communication for Distributed Gradient Descent" (47 of 147 papers shown)

Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training with Non-IID Private Data
Sohei Itahara, Takayuki Nishio, Yusuke Koda, M. Morikura, Koji Yamamoto
FedML · 25 · 251 · 0 · 14 Aug 2020

On the Convergence of SGD with Biased Gradients
Ahmad Ajalloeian, Sebastian U. Stich
6 · 84 · 0 · 31 Jul 2020

Privacy-preserving Artificial Intelligence Techniques in Biomedicine
Reihaneh Torkzadehmahani, Reza Nasirigerdeh, David B. Blumenthal, T. Kacprowski, M. List, ..., Harald H. H. W. Schmidt, A. Schwalber, Christof Tschohl, Andrea Wohner, Jan Baumbach
26 · 60 · 0 · 22 Jul 2020

Multi-Armed Bandit Based Client Scheduling for Federated Learning
Wenchao Xia, Tony Q.S. Quek, Kun Guo, Wanli Wen, Howard H. Yang, Hongbo Zhu
FedML · 55 · 218 · 0 · 05 Jul 2020

Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi
FedML · 42 · 274 · 0 · 02 Jul 2020

Is Network the Bottleneck of Distributed Training?
Zhen Zhang, Chaokun Chang, Yanghua Peng, Yida Wang, R. Arora, Xin Jin
25 · 70 · 0 · 17 Jun 2020

Characterizing Impacts of Heterogeneity in Federated Learning upon Large-Scale Smartphone Data
Chengxu Yang, Qipeng Wang, Mengwei Xu, Shangguang Wang, Kaigui Bian, Yunxin Liu, Xuanzhe Liu
24 · 22 · 0 · 12 Jun 2020

rTop-k: A Statistical Estimation Approach to Distributed SGD
L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür
32 · 65 · 0 · 21 May 2020

Detached Error Feedback for Distributed SGD with Random Sparsification
An Xu, Heng-Chiao Huang
41 · 9 · 0 · 11 Apr 2020

Reducing Data Motion to Accelerate the Training of Deep Neural Networks
Sicong Zhuang, Cristiano Malossi, Marc Casas
24 · 0 · 0 · 05 Apr 2020

Privacy-preserving Incremental ADMM for Decentralized Consensus Optimization
Yu Ye, Hao Chen, Ming Xiao, Mikael Skoglund, H. Vincent Poor
24 · 28 · 0 · 24 Mar 2020

Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning
Dingzhu Wen, M. Bennis, Kaibin Huang
31 · 48 · 0 · 10 Mar 2020

Ternary Compression for Communication-Efficient Federated Learning
Jinjin Xu, W. Du, Ran Cheng, Wangli He, Yaochu Jin
MQ · FedML · 47 · 174 · 0 · 07 Mar 2020

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
30 · 12 · 0 · 06 Mar 2020

Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Richeng Jin, Yufan Huang, Xiaofan He, H. Dai, Tianfu Wu
FedML · 27 · 62 · 0 · 25 Feb 2020

Distributed Training of Deep Neural Network Acoustic Models for Automatic Speech Recognition
Xiaodong Cui, Wei Zhang, Ulrich Finkler, G. Saon, M. Picheny, David S. Kung
27 · 19 · 0 · 24 Feb 2020

Communication Efficient Federated Learning over Multiple Access Channels
Wei-Ting Chang, Ravi Tandon
FedML · 23 · 44 · 0 · 23 Jan 2020

Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
Pengchao Han, Shiqiang Wang, K. Leung
FedML · 35 · 175 · 0 · 14 Jan 2020

Variance Reduced Local SGD with Lower Communication Complexity
Xian-Feng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng
FedML · 42 · 152 · 0 · 30 Dec 2019

Randomized Reactive Redundancy for Byzantine Fault-Tolerance in Parallelized Learning
Nirupam Gupta, Nitin H. Vaidya
FedML · 38 · 8 · 0 · 19 Dec 2019

Understanding Top-k Sparsification in Distributed Deep Learning
S. Shi, Xiaowen Chu, Ka Chun Cheung, Simon See
30 · 95 · 0 · 20 Nov 2019

Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, Xiaowen Chu
19 · 22 · 0 · 20 Nov 2019

JSDoop and TensorFlow.js: Volunteer Distributed Web Browser-Based Neural Network Training
José Á. Morell, Andrés Camero, Enrique Alba
29 · 9 · 0 · 12 Oct 2019

Straggler-Agnostic and Communication-Efficient Distributed Primal-Dual Algorithm for High-Dimensional Data Mining
Zhouyuan Huo, Heng-Chiao Huang
FedML · 19 · 5 · 0 · 09 Oct 2019

Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
Jun Sun, Tianyi Chen, G. Giannakis, Zaiyue Yang
30 · 93 · 0 · 17 Sep 2019

An End-to-End Encrypted Neural Network for Gradient Updates Transmission in Federated Learning
Hongyu Li, Tianqi Han
FedML · 19 · 32 · 0 · 22 Aug 2019

Federated Learning over Wireless Fading Channels
M. Amiri, Deniz Gunduz
33 · 508 · 0 · 23 Jul 2019

Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent
Shuheng Shen, Linli Xu, Jingchang Liu, Xianfeng Liang, Yifei Cheng
ODL · FedML · 29 · 24 · 0 · 28 Jun 2019

Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik
21 · 151 · 0 · 27 May 2019

Priority-based Parameter Propagation for Distributed DNN Training
Anand Jayarajan, Jinliang Wei, Garth A. Gibson, Alexandra Fedorova, Gennady Pekhimenko
AI4CE · 22 · 178 · 0 · 10 May 2019

Robust and Communication-Efficient Federated Learning from Non-IID Data
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek
FedML · 24 · 1,337 · 0 · 07 Mar 2019

A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
S. Shi, Qiang-qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang, Xiaowen Chu
40 · 135 · 0 · 14 Jan 2019

Broadband Analog Aggregation for Low-Latency Federated Edge Learning (Extended Version)
Guangxu Zhu, Yong Wang, Kaibin Huang
FedML · 41 · 638 · 0 · 30 Dec 2018

A Hitchhiker's Guide On Distributed Training of Deep Neural Networks
K. Chahal, Manraj Singh Grover, Kuntal Dey
3DH · OOD · 6 · 53 · 0 · 28 Oct 2018

Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
57 · 429 · 0 · 22 Aug 2018

Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization
Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang
24 · 235 · 0 · 21 Jun 2018

Double Quantization for Communication-Efficient Distributed Optimization
Yue Yu, Jiaxiang Wu, Longbo Huang
MQ · 19 · 57 · 0 · 25 May 2018

LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen, G. Giannakis, Tao Sun, W. Yin
34 · 297 · 0 · 25 May 2018

Local SGD Converges Fast and Communicates Little
Sebastian U. Stich
FedML · 85 · 1,047 · 0 · 24 May 2018

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek
MQ · 36 · 212 · 0 · 22 May 2018

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler
GNN · 33 · 704 · 0 · 26 Feb 2018

SparCML: High-Performance Sparse Communication for Machine Learning
Cédric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, Torsten Hoefler
29 · 126 · 0 · 22 Feb 2018

3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning
Hyeontaek Lim, D. Andersen, M. Kaminsky
21 · 70 · 0 · 21 Feb 2018

Distributed Deep Reinforcement Learning: Learn how to play Atari games in 21 minutes
Igor Adamski, R. Adamski, T. Grel, Adam Jedrych, Kamil Kaczmarek, Henryk Michalewski
OffRL · 41 · 37 · 0 · 09 Jan 2018

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Chengyue Wu, Song Han, Huizi Mao, Yu Wang, W. Dally
59 · 1,388 · 0 · 05 Dec 2017

Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang
15 · 522 · 0 · 26 Oct 2017

TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
20 · 984 · 0 · 22 May 2017