ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations
arXiv:1906.02367 · 6 June 2019
Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi · MQ
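The title names the paper's three ingredients: quantization, sparsification, and local computations, combined with an error-feedback memory. The following is a minimal single-worker sketch of that combination, not the authors' code: `topk_sparsify`, `sign_quantize`, and `qsparse_local_step` are illustrative names, and scaled sign quantization is just one choice of quantizer assumed here for concreteness.

```python
import numpy as np

def topk_sparsify(v, k):
    # Keep the k largest-magnitude entries of v; zero the rest.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def sign_quantize(v):
    # Scaled sign quantization: sign(v) times one scalar
    # (mean magnitude over the nonzero support).
    support = v != 0
    if not support.any():
        return np.zeros_like(v)
    scale = np.abs(v[support]).mean()
    return scale * np.sign(v)

def qsparse_local_step(x, grad_fn, lr, k, local_steps, memory):
    # Run `local_steps` of plain SGD locally, then compress the net update
    # with top-k sparsification followed by quantization. `memory` carries
    # the compression error forward (error feedback) so nothing is lost.
    x_local = x.copy()
    for _ in range(local_steps):
        x_local -= lr * grad_fn(x_local)
    update = (x - x_local) + memory           # net local progress + carried error
    compressed = sign_quantize(topk_sparsify(update, k))
    new_memory = update - compressed          # error deferred to the next round
    return x - compressed, new_memory
```

As a usage sketch, minimizing 0.5‖x‖² with `grad_fn = lambda z: z` and iterating `qsparse_local_step` drives ‖x‖ toward zero even though each round transmits only k signs and one scalar, because the error-feedback memory eventually applies what compression dropped.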

Papers citing "Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations"

39 of 89 papers shown

  • Communication-Efficient Distributed Learning via Sparse and Adaptive Stochastic Gradient
    Xiaoge Deng, Dongsheng Li, Tao Sun, Xicheng Lu · FedML · 08 Dec 2021
  • Collaborative Learning over Wireless Networks: An Introductory Overview
    Emre Ozfatura, Deniz Gunduz, H. Vincent Poor · 07 Dec 2021
  • What Do We Mean by Generalization in Federated Learning?
    Honglin Yuan, Warren Morningstar, Lin Ning, K. Singhal · OOD, FedML · 27 Oct 2021
  • Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
    Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, S. Kadhe, Gauri Joshi · 14 Oct 2021
  • Federated Learning via Plurality Vote
    Kai Yue, Richeng Jin, Chau-Wai Wong, H. Dai · FedML · 06 Oct 2021
  • Fundamental limits of over-the-air optimization: Are analog schemes optimal?
    Shubham K. Jha, Prathamesh Mayekar, Himanshu Tyagi · 11 Sep 2021
  • ErrorCompensatedX: error compensation for variance reduced algorithms
    Hanlin Tang, Yao Li, Ji Liu, Ming Yan · 04 Aug 2021
  • Rethinking gradient sparsification as total error minimization
    Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis · 02 Aug 2021
  • QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning
    Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi · FedML, MQ · 29 Jul 2021
  • A Field Guide to Federated Optimization
    Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu · FedML · 14 Jul 2021
  • BAGUA: Scaling up Distributed Learning with System Relaxations
    Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang · 03 Jul 2021
  • On Large-Cohort Training for Federated Learning
    Zachary B. Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian, Virginia Smith · FedML · 15 Jun 2021
  • Fast Federated Learning in the Presence of Arbitrary Device Unavailability
    Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang · FedML · 08 Jun 2021
  • Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression
    Zhefeng Qiao, Xianghao Yu, Jun Zhang, Khaled B. Letaief · FedML · 26 Apr 2021
  • Learned Gradient Compression for Distributed Deep Learning
    L. Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis · 16 Mar 2021
  • EventGraD: Event-Triggered Communication in Parallel Machine Learning
    Soumyadip Ghosh, B. Aquino, V. Gupta · FedML · 12 Mar 2021
  • Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning
    Zachary B. Charles, Jakub Konecný · FedML · 08 Mar 2021
  • Personalized Federated Learning using Hypernetworks
    Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik · FedML · 08 Mar 2021
  • Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
    Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko · 04 Mar 2021
  • Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer
    Qunsong Zeng, Yuqing Du, Kaibin Huang · 24 Feb 2021
  • MARINA: Faster Non-Convex Distributed Learning with Compression
    Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik · 15 Feb 2021
  • 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
    Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He · AI4CE · 04 Feb 2021
  • Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis
    Hong Xing, Osvaldo Simeone, Suzhi Bi · 29 Jan 2021
  • Faster Non-Convex Federated Learning via Global and Local Momentum
    Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu · FedML · 07 Dec 2020
  • On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization
    Abolfazl Hashemi, Anish Acharya, Rudrajit Das, H. Vikalo, Sujay Sanghavi, Inderjit Dhillon · 20 Nov 2020
  • Local SGD: Unified Theory and New Efficient Methods
    Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik · FedML · 03 Nov 2020
  • Optimal Client Sampling for Federated Learning
    Wenlin Chen, Samuel Horváth, Peter Richtárik · FedML · 26 Oct 2020
  • Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
    Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor · MoMe, FedML · 15 Jul 2020
  • Federated Learning with Compression: Unified Analysis and Sharp Guarantees
    Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi · FedML · 02 Jul 2020
  • Federated Accelerated Stochastic Gradient Descent
    Honglin Yuan, Tengyu Ma · FedML · 16 Jun 2020
  • rTop-k: A Statistical Estimation Approach to Distributed SGD
    L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür · 21 May 2020
  • Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks
    Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang · 05 May 2020
  • Detached Error Feedback for Distributed SGD with Random Sparsification
    An Xu, Heng-Chiao Huang · 11 Apr 2020
  • A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
    Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich · FedML · 23 Mar 2020
  • Adaptive Federated Optimization
    Sashank J. Reddi, Zachary B. Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konecný, Sanjiv Kumar, H. B. McMahan · FedML · 29 Feb 2020
  • Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
    Richeng Jin, Yufan Huang, Xiaofan He, H. Dai, Tianfu Wu · FedML · 25 Feb 2020
  • Personalized Federated Learning: A Meta-Learning Approach
    Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar · FedML · 19 Feb 2020
  • Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
    Pengchao Han, Shiqiang Wang, K. Leung · FedML · 14 Jan 2020
  • Gradient Descent with Compressed Iterates
    Ahmed Khaled, Peter Richtárik · 10 Sep 2019