arXiv:1907.07346
DeepSqueeze: Decentralization Meets Error-Compensated Compression
17 July 2019
Hanlin Tang, Xiangru Lian, Shuang Qiu, Lei Yuan, Ce Zhang, Tong Zhang, Ji Liu
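The paper's title names error-compensated compression. As a rough illustration only (this is not the paper's decentralized algorithm, and the function names and parameters here are ours), a minimal single-worker sketch of error feedback with top-k gradient sparsification:

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def error_compensated_step(grad, error, k, lr=0.1):
    """One error-feedback step: compress (grad + carried error),
    transmit the compressed vector, and carry the residual forward."""
    corrected = grad + error
    compressed = topk_compress(corrected, k)
    new_error = corrected - compressed  # residual fed back at the next step
    update = lr * compressed            # what a worker would actually apply/send
    return update, new_error
```

The key property is that the compression error is not discarded but accumulated into `error`, so information dropped by `topk_compress` re-enters later steps instead of being lost.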
Papers citing "DeepSqueeze: Decentralization Meets Error-Compensated Compression" (21 of 21 papers shown)
DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu (15 May 2019)
Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi (01 Feb 2019) [FedML]
Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training
Youjie Li, Hang Qiu, Songze Li, A. Avestimehr, Nam Sung Kim, Alex Schwing (08 Nov 2018) [FedML]
signSGD with Majority Vote is Communication Efficient And Fault Tolerant
Jeremy Bernstein, Jiawei Zhao, Kamyar Azizzadenesheli, Anima Anandkumar (11 Oct 2018) [FedML]
Sparsified SGD with Memory
Sebastian U. Stich, Jean-Baptiste Cordonnier, Martin Jaggi (20 Sep 2018)
COLA: Decentralized Linear Learning
Lie He, An Bian, Martin Jaggi (13 Aug 2018)
Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization
Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang (21 Jun 2018)
ATOMO: Communication-efficient Learning via Atomic Sparsification
Hongyi Wang, Scott Sievert, Zachary B. Charles, Shengchao Liu, S. Wright, Dimitris Papailiopoulos (11 Jun 2018)
Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters
Pavel Dvurechensky, D. Dvinskikh, Alexander Gasnikov, César A. Uribe, Angelia Nedić (11 Jun 2018)
Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Zebang Shen, Aryan Mokhtari, Tengfei Zhou, P. Zhao, Hui Qian (25 May 2018)
LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen, G. Giannakis, Tao Sun, W. Yin (25 May 2018)
Communication Compression for Decentralized Training
Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, Ji Liu (17 Mar 2018)
SparCML: High-Performance Sparse Communication for Machine Learning
Cédric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, Torsten Hoefler (22 Feb 2018)
Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang (26 Oct 2017)
Asynchronous Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian, Wei Zhang, Ce Zhang, Ji Liu (18 Oct 2017) [ODL]
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu (25 May 2017)
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li (22 May 2017)
TensorFlow: A system for large-scale machine learning
Martín Abadi, P. Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, ..., Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zheng (27 May 2016) [GNN, AI4CE]
Distributed optimization over time-varying directed graphs
A. Nedić, Alexander Olshevsky (10 Mar 2013)
HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright (28 Jun 2011)
Distributed Delayed Stochastic Optimization
Alekh Agarwal, John C. Duchi (28 Apr 2011)