Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
arXiv:1712.01887 (v3, latest) · 5 December 2017
Chengyue Wu, Song Han, Huizi Mao, Yu Wang, W. Dally
Links: ArXiv (abs) · PDF · HTML · GitHub (222★)

Papers citing "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" (25 of 625 papers shown)

Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
Jianyu Wang, Gauri Joshi · 22 Aug 2018

Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi · 22 Aug 2018

A study on speech enhancement using exponent-only floating point quantized neural network (EOFP-QNN)
Y. Hsu, Yu-Chen Lin, Szu-Wei Fu, Yu Tsao, Tei-Wei Kuo · MQ · 17 Aug 2018

RedSync: Reducing Synchronization Traffic for Distributed Deep Learning
Jiarui Fang, Haohuan Fu, Guangwen Yang, Cho-Jui Hsieh · GNN · 13 Aug 2018

Pushing the boundaries of parallel Deep Learning -- A practical approach
Paolo Viviani, M. Drocco, Marco Aldinucci · OOD · 25 Jun 2018

Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization
Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang · 21 Jun 2018

ATOMO: Communication-efficient Learning via Atomic Sparsification
Hongyi Wang, Scott Sievert, Zachary B. Charles, Shengchao Liu, S. Wright, Dimitris Papailiopoulos · 11 Jun 2018

The Effect of Network Width on the Performance of Large-batch Training
Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, Paraschos Koutris · 11 Jun 2018

Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark
Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, K. Olukotun, Christopher Ré, Matei A. Zaharia · 04 Jun 2018

Federated Learning with Non-IID Data
Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, Vikas Chandra · FedML · 02 Jun 2018

Structurally Sparsified Backward Propagation for Faster Long Short-Term Memory Training
Maohua Zhu, Jason Clemons, Jeff Pool, Minsoo Rhu, S. Keckler, Yuan Xie · 01 Jun 2018

Grow and Prune Compact, Fast, and Accurate LSTMs
Xiaoliang Dai, Hongxu Yin, N. Jha · VLM, SyDa · 30 May 2018

cpSGD: Communication-efficient and differentially-private distributed SGD
Naman Agarwal, A. Suresh, Felix X. Yu, Sanjiv Kumar, H. B. McMahan · FedML · 27 May 2018

Local SGD Converges Fast and Communicates Little
Sebastian U. Stich · FedML · 24 May 2018

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek · MQ · 22 May 2018

Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy · 21 May 2018

Exploiting Unintended Feature Leakage in Collaborative Learning
Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov · FedML · 10 May 2018

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler · GNN · 26 Feb 2018

SparCML: High-Performance Sparse Communication for Machine Learning
Cédric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, Torsten Hoefler · 22 Feb 2018

3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning
Hyeontaek Lim, D. Andersen, M. Kaminsky · 21 Feb 2018

On Scale-out Deep Learning Training for Cloud and HPC
Srinivas Sridharan, K. Vaidyanathan, Dhiraj D. Kalamkar, Dipankar Das, Mikhail E. Smorkalov, ..., Dheevatsa Mudigere, Naveen Mellempudi, Sasikanth Avancha, Bharat Kaul, Pradeep Dubey · BDL · 24 Jan 2018

Differentially Private Federated Learning: A Client Level Perspective
Robin C. Geyer, T. Klein, Moin Nabi · FedML · 20 Dec 2017

Differentially Private Distributed Learning for Language Modeling Tasks
Vadim Popov, Mikhail Kudinov, Irina Piontkovskaya, Petr Vytovtov, A. Nevidomsky · FedML · 20 Dec 2017

Training Simplification and Model Simplification for Deep Learning: A Minimal Effort Back Propagation Method
Xu Sun, Xuancheng Ren, Shuming Ma, Bingzhen Wei, Wei Li, Jingjing Xu, Houfeng Wang, Yi Zhang · 17 Nov 2017

Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang · 26 Oct 2017