The Convergence of Sparsified Gradient Methods

27 September 2018 (arXiv:1809.10505)
Dan Alistarh, Torsten Hoefler, M. Johansson, Sarit Khirirat, Nikola Konstantinov, Cédric Renggli
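
For context, the paper analyzes distributed SGD variants that transmit only the k largest-magnitude gradient coordinates per step, with the dropped coordinates accumulated in a local error memory and reinjected on later steps. Below is a minimal NumPy sketch of that top-k-with-memory scheme on a toy quadratic; the variable names and constants are illustrative, not taken from the paper's code.

import numpy as np

def top_k_sparsify(grad, k):
    # Keep the k largest-magnitude entries of grad; zero out the rest.
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

# Toy run: minimize f(x) = 0.5 * ||x - target||^2, transmitting only
# k of dim coordinates per step and carrying the rest in a memory term.
rng = np.random.default_rng(0)
dim, k, lr = 100, 10, 0.1
target = rng.normal(size=dim)
x = np.zeros(dim)
memory = np.zeros(dim)               # gradient mass dropped so far

for step in range(200):
    grad = x - target                # exact gradient of the quadratic
    corrected = grad + memory        # reinject previously dropped mass
    sent = top_k_sparsify(corrected, k)
    memory = corrected - sent        # residual carried to the next step
    x -= lr * sent

print(f"distance to optimum: {np.linalg.norm(x - target):.4f}")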

Papers citing "The Convergence of Sparsified Gradient Methods"

50 of 125 citing papers shown:
Federated Random Reshuffling with Compression and Variance Reduction
Grigory Malinovsky, Peter Richtárik (FedML, 08 May 2022)

Communication-Efficient Adaptive Federated Learning
Yujia Wang, Lu Lin, Jinghui Chen (FedML, 05 May 2022)

Efficient Convex Optimization Requires Superlinear Memory
A. Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant (29 Mar 2022)

Convert, compress, correct: Three steps toward communication-efficient DNN training
Zhongzhu Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini (17 Mar 2022)

Linear Stochastic Bandits over a Bit-Constrained Channel
A. Mitra, Hamed Hassani, George J. Pappas (02 Mar 2022)

Survey on Large Scale Neural Network Training
Julia Gusak, Daria Cherniuk, Alena Shilova, A. Katrutsa, Daniel Bershatsky, ..., Lionel Eyraud-Dubois, Oleg Shlyazhko, Denis Dimitrov, Ivan Oseledets, Olivier Beaumont (21 Feb 2022)

Distributed Learning With Sparsified Gradient Differences
Yicheng Chen, Rick S. Blum, Martin Takáč, Brian M. Sadler (05 Feb 2022)

BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi (31 Jan 2022)

Decentralized Multi-Task Stochastic Optimization With Compressed Communications
Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar (23 Dec 2021)

Communication-Efficient Distributed SGD with Compressed Sensing
Yujie Tang, V. Ramanathan, Junshan Zhang, Na Li (FedML, 15 Dec 2021)

FastSGD: A Fast Compressed SGD Framework for Distributed Machine Learning
Keyu Yang, Lu Chen, Zhihao Zeng, Yunjun Gao (08 Dec 2021)

Communication-Efficient Distributed Learning via Sparse and Adaptive Stochastic Gradient
Xiaoge Deng, Dongsheng Li, Tao Sun, Xicheng Lu (FedML, 08 Dec 2021)

Collaborative Learning over Wireless Networks: An Introductory Overview
Emre Ozfatura, Deniz Gunduz, H. Vincent Poor (07 Dec 2021)

Wyner-Ziv Gradient Compression for Federated Learning
Kai Liang, Huiru Zhong, Haoning Chen, Youlong Wu (FedML, 16 Nov 2021)

DNN gradient lossless compression: Can GenNorm be the answer?
Zhongzhu Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini (15 Nov 2021)

Federated Expectation Maximization with heterogeneity mitigation and variance reduction
Aymeric Dieuleveut, G. Fort, Eric Moulines, Geneviève Robin (FedML, 03 Nov 2021)

Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning
Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik (FedML, 02 Nov 2021)

Large-Scale Deep Learning Optimizations: A Comprehensive Survey
Xiaoxin He, Fuzhao Xue, Xiaozhe Ren, Yang You (01 Nov 2021)

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, S. Kadhe, Gauri Joshi (14 Oct 2021)

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik (07 Oct 2021)

Fundamental limits of over-the-air optimization: Are analog schemes optimal?
Shubham K. Jha, Prathamesh Mayekar, Himanshu Tyagi (11 Sep 2021)

EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher (FedML, 19 Aug 2021)

Rethinking gradient sparsification as total error minimization
Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis (02 Aug 2021)

A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu (FedML, 14 Jul 2021)

BAGUA: Scaling up Distributed Learning with System Relaxations
Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang (03 Jul 2021)

Escaping Saddle Points with Compressed SGD
Dmitrii Avdiukhin, G. Yaroslavtsev (21 May 2021)

Towards Demystifying Serverless Machine Learning Training
Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang (17 May 2021)

Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates
Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Yi Pan (FedML, 10 May 2021)

ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, ..., Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, K. Gopalakrishnan (21 Apr 2021)

Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, M. Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor (05 Apr 2021)

DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation
Wei Ping, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, Bo-wen Li (FedML, 20 Mar 2021)

Learned Gradient Compression for Distributed Deep Learning
L. Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis (16 Mar 2021)

Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget
R. Saha, Mert Pilanci, Andrea J. Goldsmith (13 Mar 2021)

EventGraD: Event-Triggered Communication in Parallel Machine Learning
Soumyadip Ghosh, B. Aquino, V. Gupta (FedML, 12 Mar 2021)

On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos (28 Feb 2021)

Experiments with Rich Regime Training for Deep Learning
Xinyan Li, A. Banerjee (26 Feb 2021)

Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach
Junshan Zhang, Na Li, M. Dedeoglu (FedML, 16 Feb 2021)

Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani (FedML, 14 Feb 2021)

Time-Correlated Sparsification for Communication-Efficient Federated Learning
Emre Ozfatura, Kerem Ozfatura, Deniz Gunduz (FedML, 21 Jan 2021)

Bayesian Federated Learning over Wireless Networks
Seunghoon Lee, Chanhoo Park, Songnam Hong, Yonina C. Eldar, Namyoon Lee (31 Dec 2020)

CADA: Communication-Adaptive Distributed Adam
Tianyi Chen, Ziye Guo, Yuejiao Sun, W. Yin (ODL, 31 Dec 2020)

Faster Non-Convex Federated Learning via Global and Local Momentum
Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu (FedML, 07 Dec 2020)

A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
Xinyi Xu, Lingjuan Lyu (FedML, 20 Nov 2020)

A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!
D. Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtárik, Sebastian U. Stich (03 Nov 2020)

Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik (FedML, 03 Nov 2020)

Sparse Communication for Training Deep Networks
Negar Foroutan, Martin Jaggi (FedML, 19 Sep 2020)

On Communication Compression for Distributed Optimization on Heterogeneous Data
Sebastian U. Stich (04 Sep 2020)

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely (26 Aug 2020)

On the Convergence of SGD with Biased Gradients
Ahmad Ajalloeian, Sebastian U. Stich (31 Jul 2020)

Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi (FedML, 02 Jul 2020)
