Distributed Learning with Compressed Gradient Differences
Konstantin Mishchenko, Eduard A. Gorbunov, Martin Takáč, Peter Richtárik
arXiv:1901.09269, 26 January 2019
Papers citing "Distributed Learning with Compressed Gradient Differences" (44 papers shown)
- Accelerated Distributed Optimization with Compression and Error Feedback. Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich (11 Mar 2025)
- Communication-efficient Vertical Federated Learning via Compressed Error Feedback. Pedro Valdeira, João Xavier, Cláudia Soares, Yuejie Chi [FedML] (20 Jun 2024)
- Inexact subgradient methods for semialgebraic functions. Jérôme Bolte, Tam Le, Éric Moulines, Edouard Pauwels (30 Apr 2024)
- SignSGD with Federated Voting. Chanho Park, H. Vincent Poor, Namyoon Lee [FedML] (25 Mar 2024)
- LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression. Laurent Condat, A. Maranjyan, Peter Richtárik (07 Mar 2024)
- Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method. Elissa Mhanna, Mohamad Assaad (30 Jan 2024)
- Correlated Quantization for Faster Nonconvex Distributed Optimization. Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik [MQ] (10 Jan 2024)
- Kimad: Adaptive Gradient Compression with Bandwidth Awareness. Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik (13 Dec 2023)
- Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates. Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik (15 Oct 2023)
- Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression. Boyue Li, Yuejie Chi (17 May 2023)
- Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression. Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan (12 May 2023)
- ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression. Avetik G. Karagulyan, Peter Richtárik [FedML] (08 Mar 2023)
- Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities. Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov (15 Feb 2023)
- CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence. Kun-Yen Huang, Shin-Yi Pu (14 Jan 2023)
- On the effectiveness of partial variance reduction in federated learning with heterogeneous data. Bo-wen Li, Mikkel N. Schmidt, T. S. Alstrøm, Sebastian U. Stich [FedML] (05 Dec 2022)
- Coresets for Vertical Federated Learning: Regularized Linear Regression and K-Means Clustering. Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao [FedML] (26 Oct 2022)
- Adaptive Top-K in SGD for Communication-Efficient Distributed Learning. Mengzhe Ruan, Guangfeng Yan, Yuanzhang Xiao, Linqi Song, Weitao Xu (24 Oct 2022)
- Label driven Knowledge Distillation for Federated Learning with non-IID Data. Minh-Duong Nguyen, Viet Quoc Pham, D. Hoang, Long Tran-Thanh, Diep N. Nguyen, W. Hwang (29 Sep 2022)
- Federated Optimization Algorithms with Random Reshuffling and Gradient Compression. Abdurakhmon Sadiev, Grigory Malinovsky, Eduard A. Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik [FedML] (14 Jun 2022)
- Federated Random Reshuffling with Compression and Variance Reduction. Grigory Malinovsky, Peter Richtárik [FedML] (08 May 2022)
- Linear Stochastic Bandits over a Bit-Constrained Channel. A. Mitra, Hamed Hassani, George J. Pappas (02 Mar 2022)
- Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods. Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou (15 Feb 2022)
- FL_PyTorch: optimization research simulator for federated learning. Konstantin Burlachenko, Samuel Horváth, Peter Richtárik [FedML] (07 Feb 2022)
- BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression. Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi (31 Jan 2022)
- Federated Expectation Maximization with heterogeneity mitigation and variance reduction. Aymeric Dieuleveut, G. Fort, Eric Moulines, Geneviève Robin [FedML] (03 Nov 2021)
- Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik [FedML] (02 Nov 2021)
- Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation. Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, S. Kadhe, Gauri Joshi (14 Oct 2021)
- EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning. S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher [FedML] (19 Aug 2021)
- Decentralized Composite Optimization with Compression. Yao Li, Xiaorui Liu, Jiliang Tang, Ming Yan, Kun Yuan (10 Aug 2021)
- ErrorCompensatedX: error compensation for variance reduced algorithms. Hanlin Tang, Yao Li, Ji Liu, Ming Yan (04 Aug 2021)
- Secure Distributed Training at Scale. Eduard A. Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin [FedML] (21 Jun 2021)
- FedNL: Making Newton-Type Methods Applicable to Federated Learning. M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik [FedML] (05 Jun 2021)
- Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates. Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Yi Pan [FedML] (10 May 2021)
- Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices. Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko (04 Mar 2021)
- IntSGD: Adaptive Floatless Compression of Stochastic Gradients. Konstantin Mishchenko, Bokun Wang, D. Kovalev, Peter Richtárik (16 Feb 2021)
- MARINA: Faster Non-Convex Distributed Learning with Compression. Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik (15 Feb 2021)
- Distributed Second Order Methods with Fast Rates and Compressed Communication. Rustem Islamov, Xun Qian, Peter Richtárik (14 Feb 2021)
- Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients. A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani [FedML] (14 Feb 2021)
- Local SGD: Unified Theory and New Efficient Methods. Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik [FedML] (03 Nov 2020)
- On Communication Compression for Distributed Optimization on Heterogeneous Data. Sebastian U. Stich (04 Sep 2020)
- Federated Accelerated Stochastic Gradient Descent. Honglin Yuan, Tengyu Ma [FedML] (16 Jun 2020)
- Detached Error Feedback for Distributed SGD with Random Sparsification. An Xu, Heng-Chiao Huang (11 Apr 2020)
- Gradient Descent with Compressed Iterates. Ahmed Khaled, Peter Richtárik (10 Sep 2019)
- Natural Compression for Distributed Deep Learning. Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik (27 May 2019)