QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding

7 October 2016
Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnović
MQ
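
The scheme named in the title maps each gradient coordinate onto one of s + 1 discrete magnitude levels, rounding up or down at random so that the quantized vector remains an unbiased estimate of the original gradient. Below is a minimal NumPy sketch of that idea; the function and argument names are illustrative, and the paper's accompanying Elias-style lossless integer encoding of the result is omitted.

```python
import numpy as np

def qsgd_quantize(v, s=256, rng=None):
    """Stochastically quantize gradient vector v to s magnitude levels.

    Unbiased by construction: E[qsgd_quantize(v)] == v.
    (Illustrative sketch only; QSGD additionally compresses the
    output with an Elias-style integer code, omitted here.)
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    # Position of each |v_i| among the s levels spanning [0, ||v||_2].
    scaled = s * np.abs(v) / norm
    floor = np.floor(scaled)
    # Round up with probability equal to the fractional part;
    # this stochastic rounding is what makes the estimator unbiased.
    level = floor + (rng.random(v.shape) < (scaled - floor))
    return norm * np.sign(v) * level / s
```

A worker would transmit the quantized vector (plus the scalar norm) in place of the raw gradient; with few levels, e.g. s = 4, most coordinates round to zero, which is where the communication savings come from.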

Papers citing "QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding"

Showing 28 of 128 citing papers.

Federated Accelerated Stochastic Gradient Descent
Honglin Yuan, Tengyu Ma
FedML · 16 Jun 2020

Communication-Efficient Gradient Coding for Straggler Mitigation in Distributed Learning
S. Kadhe, O. O. Koyluoglu, Kannan Ramchandran
14 May 2020

Detached Error Feedback for Distributed SGD with Random Sparsification
An Xu, Heng-Chiao Huang
11 Apr 2020

A Robust Gradient Tracking Method for Distributed Optimization over Directed Networks
Shi Pu
31 Mar 2020

Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning
Shaoxiong Ji, Wenqi Jiang, A. Walid, Xue Li
FedML · 21 Mar 2020

A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems
Yo-Seb Jeon, M. Amiri, Jun Li, H. Vincent Poor
18 Mar 2020

Ternary Compression for Communication-Efficient Federated Learning
Jinjin Xu, W. Du, Ran Cheng, Wangli He, Yaochu Jin
MQ, FedML · 07 Mar 2020

Distributed Training of Deep Neural Network Acoustic Models for Automatic Speech Recognition
Xiaodong Cui, Wei Zhang, Ulrich Finkler, G. Saon, M. Picheny, David S. Kung
24 Feb 2020

Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan, Egor Shulgin, Peter Richtárik
20 Feb 2020

Towards Sharper First-Order Adversary with Quantized Gradients
Zhuanghua Liu, Ivor W. Tsang
AAML · 01 Feb 2020

One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning: Design and Convergence Analysis
Guangxu Zhu, Yuqing Du, Deniz Gunduz, Kaibin Huang
16 Jan 2020

MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
Shaoshuai Shi, Xiaowen Chu, Bo Li
FedML · 18 Dec 2019

Straggler-Agnostic and Communication-Efficient Distributed Primal-Dual Algorithm for High-Dimensional Data Mining
Zhouyuan Huo, Heng-Chiao Huang
FedML · 09 Oct 2019

SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum
Jianyu Wang, Vinayak Tantia, Nicolas Ballas, Michael G. Rabbat
01 Oct 2019

Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
Jun Sun, Tianyi Chen, G. Giannakis, Zaiyue Yang
17 Sep 2019

Gradient Descent with Compressed Iterates
Ahmed Khaled, Peter Richtárik
10 Sep 2019

SWALP: Stochastic Weight Averaging in Low-Precision Training
Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, A. Wilson, Christopher De Sa
26 Apr 2019

Gradient Coding with Clustering and Multi-message Communication
Emre Ozfatura, Deniz Gunduz, S. Ulukus
05 Mar 2019

On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication
Sindri Magnússon, H. S. Ghadikolaei, Na Li
26 Feb 2019

Error Feedback Fixes SignSGD and other Gradient Compression Schemes
Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U. Stich, Martin Jaggi
28 Jan 2019

Wireless Network Intelligence at the Edge
Jihong Park, S. Samarakoon, M. Bennis, Mérouane Debbah
07 Dec 2018

Collaborative Deep Learning Across Multiple Data Centers
Kele Xu, Haibo Mi, Dawei Feng, Huaimin Wang, Chuan Chen, Zibin Zheng, Xu Lan
FedML · 16 Oct 2018

cpSGD: Communication-efficient and differentially-private distributed SGD
Naman Agarwal, A. Suresh, Felix X. Yu, Sanjiv Kumar, H. B. McMahan
FedML · 27 May 2018

Double Quantization for Communication-Efficient Distributed Optimization
Yue Yu, Jiaxiang Wu, Longbo Huang
MQ · 25 May 2018

Local SGD Converges Fast and Communicates Little
Sebastian U. Stich
FedML · 24 May 2018

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek
MQ · 22 May 2018

Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang
26 Oct 2017

TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
22 May 2017