arXiv: 1806.08054
Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization
Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang
21 June 2018
Papers citing "Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization" (42 papers shown)
γ-FedHT: Stepsize-Aware Hard-Threshold Gradient Compression in Federated Learning
Rongwei Lu, Yutong Jiang, Jinrui Zhang, Chunyang Li, Yifei Zhu, Bin Chen, Zhi Wang (18 May 2025) [FedML]
Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
Zhijie Chen, Qiaobo Li, A. Banerjee (11 Nov 2024) [FedML]
Distributed Stochastic Gradient Descent with Staleness: A Stochastic Delay Differential Equation Based Framework
Siyuan Yu, Wei Chen, H. V. Poor (17 Jun 2024)
Communication-Efficient Large-Scale Distributed Deep Learning: A Comprehensive Survey
Feng Liang, Zhen Zhang, Haifeng Lu, Victor C. M. Leung, Yanyi Guo, Xiping Hu (09 Apr 2024) [GNN]
RS-DGC: Exploring Neighborhood Statistics for Dynamic Gradient Compression on Remote Sensing Image Interpretation
Weiying Xie, Zixuan Wang, Jitao Ma, Daixun Li, Yunsong Li (29 Dec 2023)
Clip21: Error Feedback for Gradient Clipping
Sarit Khirirat, Eduard A. Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik (30 May 2023)
Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression
Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan (12 May 2023)
FedREP: A Byzantine-Robust, Communication-Efficient and Privacy-Preserving Framework for Federated Learning
Yi-Rui Yang, Kun Wang, Wulu Li (09 Mar 2023) [FedML]
GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning
Shiqi He, Qifan Yan, Feijie Wu, Lanjun Wang, Mathias Lécuyer, Ivan Beschastnikh (03 Dec 2022) [FedML]
Towards Practical Few-shot Federated NLP
Dongqi Cai, Yaozong Wu, Haitao Yuan, Shangguang Wang, F. Lin, Mengwei Xu (01 Dec 2022) [FedML]
Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression
Xiaoyun Li, Ping Li (25 Nov 2022) [FedML]
Adaptive Top-K in SGD for Communication-Efficient Distributed Learning
Mengzhe Ruan, Guangfeng Yan, Yuanzhang Xiao, Linqi Song, Weitao Xu (24 Oct 2022)
Federated Random Reshuffling with Compression and Variance Reduction
Grigory Malinovsky, Peter Richtárik (08 May 2022) [FedML]
Decentralized Multi-Task Stochastic Optimization With Compressed Communications
Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar (23 Dec 2021)
Collaborative Learning over Wireless Networks: An Introductory Overview
Emre Ozfatura, Deniz Gunduz, H. Vincent Poor (07 Dec 2021)
Distributed Adaptive Learning Under Communication Constraints
Marco Carpentiero, Vincenzo Matta, Ali H. Sayed (03 Dec 2021)
ErrorCompensatedX: Error Compensation for Variance Reduced Algorithms
Hanlin Tang, Yao Li, Ji Liu, Ming Yan (04 Aug 2021)
A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu (14 Jul 2021) [FedML]
Towards Demystifying Serverless Machine Learning Training
Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang (17 May 2021)
ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, ..., Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, K. Gopalakrishnan (21 Apr 2021)
On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos (28 Feb 2021)
Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis
Hong Xing, Osvaldo Simeone, Suzhi Bi (29 Jan 2021)
Time-Correlated Sparsification for Communication-Efficient Federated Learning
Emre Ozfatura, Kerem Ozfatura, Deniz Gunduz (21 Jan 2021) [FedML]
Faster Non-Convex Federated Learning via Global and Local Momentum
Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu (07 Dec 2020) [FedML]
Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik (03 Nov 2020) [FedML]
A Survey on Large-scale Machine Learning
Meng Wang, Weijie Fu, Xiangnan He, Shijie Hao, Xindong Wu (10 Aug 2020)
Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications
Jihong Park, S. Samarakoon, Anis Elgabli, Joongheon Kim, M. Bennis, Seong-Lyun Kim, Mérouane Debbah (06 Aug 2020)
Accelerating Federated Learning over Reliability-Agnostic Clients in Mobile Edge Computing Systems
Wentai Wu, Ligang He, Weiwei Lin, Rui Mao (28 Jul 2020)
Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi (02 Jul 2020) [FedML]
Communication Efficient Federated Learning with Energy Awareness over Wireless Networks
Richeng Jin, Xiaofan He, H. Dai (15 Apr 2020)
Communication Optimization Strategies for Distributed Deep Neural Network Training: A Survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao (06 Mar 2020)
Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Richeng Jin, Yufan Huang, Xiaofan He, H. Dai, Tianfu Wu (25 Feb 2020) [FedML]
Understanding Top-k Sparsification in Distributed Deep Learning
S. Shi, X. Chu, Ka Chun Cheung, Simon See (20 Nov 2019)
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, X. Chu (20 Nov 2019)
On-Device Machine Learning: An Algorithms and Learning Theory Perspective
Sauptik Dhar, Junyao Guo, Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah (02 Nov 2019)
High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
Yuqing Du, Sheng Yang, Kaibin Huang (09 Oct 2019)
The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Sebastian U. Stich, Sai Praneeth Karimireddy (11 Sep 2019) [FedML]
On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication
Sindri Magnússon, H. S. Ghadikolaei, Na Li (26 Feb 2019)
Error Feedback Fixes SignSGD and other Gradient Compression Schemes
Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U. Stich, Martin Jaggi (28 Jan 2019)
A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
S. Shi, Qiang-qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang, Xiaowen Chu (14 Jan 2019)
Double Quantization for Communication-Efficient Distributed Optimization
Yue Yu, Jiaxiang Wu, Longbo Huang (25 May 2018) [MQ]
3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning
Hyeontaek Lim, D. Andersen, M. Kaminsky (21 Feb 2018)