EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback

9 June 2021 · arXiv:2106.05203
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin

Papers citing "EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback"

36 / 36 papers shown
$γ$-FedHT: Stepsize-Aware Hard-Threshold Gradient Compression in Federated Learning
Rongwei Lu, Yutong Jiang, Jinrui Zhang, Chunyang Li, Yifei Zhu, Bin Chen, Zhi Wang · FedML · 18 May 2025

Convergence Analysis of Asynchronous Federated Learning with Gradient Compression for Non-Convex Optimization
Diying Yang, Yingwei Hou, Danyang Xiao, Weigang Wu · FedML · 28 Apr 2025

Striving for Simplicity: Simple Yet Effective Prior-Aware Pseudo-Labeling for Semi-Supervised Ultrasound Image Segmentation
Yaxiong Chen, Yujie Wang, Zixuan Zheng, Jingliang Hu, Yilei Shi, Shengwu Xiong, Xiao Xiang Zhu, Lichao Mou · 18 Mar 2025

Accelerated Distributed Optimization with Compression and Error Feedback
Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich · 11 Mar 2025

Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
Zhijie Chen, Qiaobo Li, A. Banerjee · FedML · 11 Nov 2024

Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy
Wei Huo, Changxin Liu, Kemi Ding, Karl H. Johansson, Ling Shi · FedML · 08 Aug 2024

Towards Federated Learning with On-device Training and Communication in 8-bit Floating Point
Bokun Wang, Axel Berg, D. A. E. Acar, Chuteng Zhou · FedML, MQ · 02 Jul 2024

Communication-efficient Vertical Federated Learning via Compressed Error Feedback
Pedro Valdeira, João Xavier, Cláudia Soares, Yuejie Chi · FedML · 20 Jun 2024

LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, A. Maranjyan, Peter Richtárik · 07 Mar 2024

Correlated Quantization for Faster Nonconvex Distributed Optimization
Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik · MQ · 10 Jan 2024

Sparse Training for Federated Learning with Regularized Error Correction
Ran Greidi, Kobi Cohen · FedML · 21 Dec 2023

Kimad: Adaptive Gradient Compression with Bandwidth Awareness
Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik · 13 Dec 2023

Federated Learning is Better with Non-Homomorphic Encryption
Konstantin Burlachenko, Abdulmajeed Alrowithi, Fahad Ali Albalawi, Peter Richtárik · FedML · 04 Dec 2023

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik · 15 Oct 2023

Asynchronous Federated Learning with Bidirectional Quantized Communications and Buffered Aggregation
Tomàs Ortega, Hamid Jafarkhani · FedML · 01 Aug 2023

Clip21: Error Feedback for Gradient Clipping
Sarit Khirirat, Eduard A. Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik · 30 May 2023

Error Feedback Shines when Features are Rare
Peter Richtárik, Elnur Gasanov, Konstantin Burlachenko · 24 May 2023

Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression
Boyue Li, Yuejie Chi · 17 May 2023

Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression
Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan · 12 May 2023

Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features
Aleksandr Beznosikov, David Dobre, Gauthier Gidel · 23 Apr 2023

ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression
Avetik G. Karagulyan, Peter Richtárik · FedML · 08 Mar 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov · 15 Feb 2023

DoCoFL: Downlink Compression for Cross-Device Federated Learning
Ron Dorfman, S. Vargaftik, Y. Ben-Itzhak, Kfir Y. Levy · FedML · 01 Feb 2023

CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence
Kun-Yen Huang, Shin-Yi Pu · 14 Jan 2023

Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression
Xiaoyun Li, Ping Li · FedML · 25 Nov 2022

Adaptive Compression for Communication-Efficient Distributed Training
Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtárik · 31 Oct 2022

Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering
Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao · FedML · 26 Oct 2022

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan · 30 Sep 2022

High Probability Bounds for Stochastic Subgradient Schemes with Heavy Tailed Noise
D. A. Parletta, Andrea Paudice, Massimiliano Pontil, Saverio Salzo · 17 Aug 2022

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou · 15 Feb 2022

FL_PyTorch: optimization research simulator for federated learning
Konstantin Burlachenko, Samuel Horváth, Peter Richtárik · FedML · 07 Feb 2022

BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi · 31 Jan 2022

EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher · FedML · 19 Aug 2021

Linearly Converging Error Compensated SGD
Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik · 23 Oct 2020

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik · ODL · 25 Aug 2020

Detached Error Feedback for Distributed SGD with Random Sparsification
An Xu, Heng-Chiao Huang · 11 Apr 2020