Finite-Bit Quantization For Distributed Algorithms With Linear Convergence

23 July 2021
Nicolò Michelusi
G. Scutari
Chang-Shen Lee
    MQ

Papers citing "Finite-Bit Quantization For Distributed Algorithms With Linear Convergence"

27 papers
Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence
Jiaqi Zhang
Keyou You
Lihua Xie
14 May 2021
Compressed Gradient Tracking Methods for Decentralized Optimization with Linear Convergence
Yiwei Liao
Zhuoru Li
Kun-Yen Huang
Shi Pu
25 Mar 2021
A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!
D. Kovalev
Anastasia Koloskova
Martin Jaggi
Peter Richtárik
Sebastian U. Stich
03 Nov 2020
Towards Tight Communication Lower Bounds for Distributed Optimisation
Dan Alistarh
Janne H. Korhonen
FedML
16 Oct 2020
On Communication Compression for Distributed Optimization on Heterogeneous Data
Sebastian U. Stich
04 Sep 2020
Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour
Mohammad Mahdi Kamani
Aryan Mokhtari
M. Mahdavi
FedML
02 Jul 2020
Linear Convergent Decentralized Optimization with Compression
Xiaorui Liu
Yao Li
Rongrong Wang
Jiliang Tang
Ming Yan
01 Jul 2020
On Biased Compression for Distributed Learning
Aleksandr Beznosikov
Samuel Horváth
Peter Richtárik
M. Safaryan
27 Feb 2020
Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis
Jinming Xu
Ye Tian
Ying Sun
G. Scutari
25 Feb 2020
A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
Eduard A. Gorbunov
Filip Hanzely
Peter Richtárik
27 May 2019
Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback
Shuai Zheng
Ziyue Huang
James T. Kwok
27 May 2019
On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication
Sindri Magnússon
H. S. Ghadikolaei
Na Li
26 Feb 2019
Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
Anastasia Koloskova
Sebastian U. Stich
Martin Jaggi
FedML
01 Feb 2019
Error Feedback Fixes SignSGD and other Gradient Compression Schemes
Sai Praneeth Karimireddy
Quentin Rebjock
Sebastian U. Stich
Martin Jaggi
28 Jan 2019
Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks
Xin Zhang
Jia Liu
Zhengyuan Zhu
Elizabeth S. Bentley
10 Dec 2018
The Convergence of Sparsified Gradient Methods
Dan Alistarh
Torsten Hoefler
M. Johansson
Sarit Khirirat
Nikola Konstantinov
Cédric Renggli
27 Sep 2018
Sparsified SGD with Memory
Sebastian U. Stich
Jean-Baptiste Cordonnier
Martin Jaggi
20 Sep 2018
Limited Rate Distributed Weight-Balancing and Average Consensus Over Digraphs
Chang-Shen Lee
Nicolò Michelusi
G. Scutari
17 Sep 2018
Distributed Nonconvex Constrained Optimization over Time-Varying Digraphs
G. Scutari
Ying Sun
04 Sep 2018
A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks
César A. Uribe
Soomin Lee
Alexander Gasnikov
A. Nedić
03 Sep 2018
Communication Compression for Decentralized Training
Hanlin Tang
Shaoduo Gan
Ce Zhang
Tong Zhang
Ji Liu
17 Mar 2018
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian
Ce Zhang
Huan Zhang
Cho-Jui Hsieh
Wei Zhang
Ji Liu
25 May 2017
A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates
Zhi Li
W. Shi
Ming Yan
25 Apr 2017
QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
Dan Alistarh
Demjan Grubic
Jerry Li
Ryota Tomioka
Milan Vojnović
MQ
07 Oct 2016
NEXT: In-Network Nonconvex Optimization
P. Lorenzo
G. Scutari
01 Feb 2016
Distributed Parameter Estimation with Quantized Communication via Running Average
Shanying Zhu
Y. Soh
Lihua Xie
23 Dec 2014
Distributed Consensus Algorithms in Sensor Networks: Quantized Data and Random Link Failures
S. Kar
José M. F. Moura
10 Dec 2007