Shifted Compression Framework: Generalizations and Improvements

21 June 2022
Egor Shulgin, Peter Richtárik

Papers citing "Shifted Compression Framework: Generalizations and Improvements"

18 / 18 papers shown

MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
Laurent Condat, Peter Richtárik
06 Jun 2021

FedNL: Making Newton-Type Methods Applicable to Federated Learning
M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik
05 Jun 2021 · FedML

Linearly Converging Error Compensated SGD
Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik
23 Oct 2020

PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning
Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi
04 Aug 2020 · FedML

Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik
26 Feb 2020 · FedML, AI4CE

Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan, Egor Shulgin, Peter Richtárik
20 Feb 2020

Distributed Fixed Point Methods with Compressed Iterates
Sélim Chraibi, Ahmed Khaled, D. Kovalev, Peter Richtárik, Adil Salim, Martin Takáč
20 Dec 2019 · FedML

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
10 Dec 2019 · FedML, AI4CE

Gradient Descent with Compressed Iterates
Ahmed Khaled, Peter Richtárik
10 Sep 2019

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations
Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi
06 Jun 2019 · MQ

A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik
27 May 2019

Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik
27 May 2019

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
D. Kovalev, Samuel Horváth, Peter Richtárik
24 Jan 2019

Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy
21 May 2018

Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang
26 Oct 2017

Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
Priya Goyal, Piotr Dollár, Ross B. Girshick, P. Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He
08 Jun 2017 · 3DH

TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
22 May 2017

Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konečný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon
18 Oct 2016 · FedML