ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation

2 February 2022
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard A. Gorbunov

Papers citing "3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation"

12 papers shown
Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
Zhijie Chen, Qiaobo Li, A. Banerjee · FedML · 11 Nov 2024

Kimad: Adaptive Gradient Compression with Bandwidth Awareness
Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik · 13 Dec 2023

Clip21: Error Feedback for Gradient Clipping
Sarit Khirirat, Eduard A. Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik · 30 May 2023

Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression
Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan · 12 May 2023

Adaptive Compression for Communication-Efficient Distributed Training
Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtárik · 31 Oct 2022

Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering
Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao · FedML · 26 Oct 2022

Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox
Abdurakhmon Sadiev, D. Kovalev, Peter Richtárik · 08 Jul 2022

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik · 07 Jun 2022

BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi · 31 Jan 2022

Permutation Compressors for Provably Faster Distributed Nonconvex Optimization
Rafal Szlendak, A. Tyurin, Peter Richtárik · 07 Oct 2021

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik · 07 Oct 2021

Linearly Converging Error Compensated SGD
Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik · 23 Oct 2020