ResearchTrend.AI

EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization
Laurent Condat, Kai Yi, Peter Richtárik
arXiv:2205.04180, 9 May 2022

Papers citing "EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization"

23 papers shown.

  1. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
     Laurent Condat, Artavazd Maranjyan, Peter Richtárik (07 Mar 2024)
  2. EF21 with Bells & Whistles: Six Algorithmic Extensions of Modern Error Feedback
     Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik (07 Oct 2021)
  3. MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
     Laurent Condat, Peter Richtárik (06 Jun 2021)
  4. Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence
     Jiaqi Zhang, Keyou You, Lihua Xie (14 May 2021)
  5. Linearly Converging Error Compensated SGD
     Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik (23 Oct 2020)
  6. Variance-Reduced Methods for Machine Learning
     Robert Mansel Gower, Mark Schmidt, Francis R. Bach, Peter Richtárik (02 Oct 2020)
  7. Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees
     Constantin Philippenko, Aymeric Dieuleveut (25 Jun 2020) [FedML]
  8. rTop-k: A Statistical Estimation Approach to Distributed SGD
     L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür (21 May 2020)
  9. Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
     Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik (26 Feb 2020) [FedML, AI4CE]
  10. Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
      M. Safaryan, Egor Shulgin, Peter Richtárik (20 Feb 2020)
  11. A Survey on Distributed Machine Learning
      Joost Verbraeken, Matthijs Wolting, Jonathan Katzy, Jeroen Kloppenburg, Tim Verbelen, Jan S. Rellermeyer (20 Dec 2019) [OOD]
  12. Advances and Open Problems in Federated Learning
      Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao (10 Dec 2019) [FedML, AI4CE]
  13. Gradient Descent with Compressed Iterates
      Ahmed Khaled, Peter Richtárik (10 Sep 2019)
  14. Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations
      Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi (06 Jun 2019) [MQ]
  15. One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods
      Filip Hanzely, Peter Richtárik (27 May 2019)
  16. A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
      Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik (27 May 2019)
  17. Natural Compression for Distributed Deep Learning
      Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik (27 May 2019)
  18. Robust and Communication-Efficient Federated Learning from Non-IID Data
      Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek (07 Mar 2019) [FedML]
  19. Gradient Sparsification for Communication-Efficient Distributed Optimization
      Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang (26 Oct 2017)
  20. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
      W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li (22 May 2017)
  21. Federated Learning: Strategies for Improving Communication Efficiency
      Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon (18 Oct 2016) [FedML]
  22. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
      Hamed Karimi, J. Nutini, Mark Schmidt (16 Aug 2016)
  23. Parallel Coordinate Descent Methods for Big Data Optimization
      Peter Richtárik, Martin Takáč (04 Dec 2012)