Error Feedback Fixes SignSGD and other Gradient Compression Schemes
Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U. Stich, Martin Jaggi
arXiv:1901.09847 · 28 January 2019
Cited By
Papers citing "Error Feedback Fixes SignSGD and other Gradient Compression Schemes" (13 of 113 shown)
Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Richeng Jin, Yufan Huang, Xiaofan He, H. Dai, Tianfu Wu · FedML · 25 Feb 2020

Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
Zhenheng Tang, S. Shi, Xiaowen Chu · FedML · 22 Feb 2020

Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan, Egor Shulgin, Peter Richtárik · 20 Feb 2020

Variance Reduced Local SGD with Lower Communication Complexity
Xian-Feng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng · FedML · 30 Dec 2019

Understanding Top-k Sparsification in Distributed Deep Learning
S. Shi, Xiaowen Chu, Ka Chun Cheung, Simon See · 20 Nov 2019

Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, Xiaowen Chu · 20 Nov 2019

Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning
Xinyan Dai, Xiao Yan, Kaiwen Zhou, Han Yang, K. K. Ng, James Cheng, Yu Fan · FedML · 12 Nov 2019

Model Pruning Enables Efficient Federated Learning on Edge Devices
Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas · 26 Sep 2019

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Sebastian U. Stich, Sai Praneeth Karimireddy · FedML · 11 Sep 2019

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi · 31 May 2019

Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horváth, Atal Narayan Sahu, Marco Canini, Peter Richtárik · 27 May 2019

Don't Use Large Mini-Batches, Use Local SGD
Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi · 22 Aug 2018

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek · MQ · 22 May 2018