Natural Compression for Distributed Deep Learning

27 May 2019
Samuel Horváth
Chen-Yu Ho
L. Horváth
Atal Narayan Sahu
Marco Canini
Peter Richtárik
arXiv: 1905.10988

Papers citing "Natural Compression for Distributed Deep Learning"

42 of 92 citing papers are listed below.

FL_PyTorch: optimization research simulator for federated learning
Konstantin Burlachenko, Samuel Horváth, Peter Richtárik
07 Feb 2022

DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization
Alexander Tyurin, Peter Richtárik
02 Feb 2022

TOFU: Towards Obfuscated Federated Updates by Encoding Weight Updates into Gradients from Proxy Data
Isha Garg, M. Nagaraj, Kaushik Roy
21 Jan 2022

Faster Rates for Compressed Federated Learning with Client-Variance Reduction
Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
24 Dec 2021

Large-Scale Deep Learning Optimizations: A Comprehensive Survey
Xiaoxin He, Fuzhao Xue, Xiaozhe Ren, Yang You
01 Nov 2021

Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
07 Oct 2021

EF21 with Bells & Whistles: Six Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik
07 Oct 2021

Communication-Efficient Federated Learning with Binary Neural Networks
YuZhi Yang, Zhaoyang Zhang, Qianqian Yang
05 Oct 2021

Comfetch: Federated Learning of Large Networks on Constrained Clients via Sketching
Tahseen Rabbani, Brandon Yushan Feng, Marco Bornstein, Kyle Rui Sang, Yifan Yang, Arjun Rajkumar, A. Varshney, Furong Huang
17 Sep 2021

CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
Zhize Li, Peter Richtárik
20 Jul 2021

A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
14 Jul 2021

EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin
09 Jun 2021

Compressed Communication for Distributed Training: Adaptive Methods and System
Yuchen Zhong, Cong Xie, Shuai Zheng, Yanghua Peng
17 May 2021

NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, V. Aksenov, Dan Alistarh, Daniel M. Roy
28 Apr 2021

Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, M. Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor
05 Apr 2021

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
04 Mar 2021

On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos
28 Feb 2021

FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
Samuel Horváth, Stefanos Laskaridis, Mario Almeida, Ilias Leondiadis, Stylianos I. Venieris, Nicholas D. Lane
26 Feb 2021

IntSGD: Adaptive Floatless Compression of Stochastic Gradients
Konstantin Mishchenko, Bokun Wang, D. Kovalev, Peter Richtárik
16 Feb 2021

MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
15 Feb 2021

Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
M. Safaryan, Filip Hanzely, Peter Richtárik
14 Feb 2021

Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik
14 Feb 2021

Time-Correlated Sparsification for Communication-Efficient Federated Learning
Emre Ozfatura, Kerem Ozfatura, Deniz Gunduz
21 Jan 2021

FedNS: Improving Federated Learning for collaborative image classification on mobile clients
Yaoxin Zhuo, Baoxin Li
20 Jan 2021

Distributed Sparse SGD with Majority Voting
Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz
12 Nov 2020

Optimal Client Sampling for Federated Learning
Jiajun He, Samuel Horváth, Peter Richtárik
26 Oct 2020

Adaptive Gradient Quantization for Data-Parallel SGD
Fartash Faghri, Iman Tabrizian, I. Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya
23 Oct 2020

Optimal Gradient Compression for Distributed and Federated Learning
Alyazeed Albasyoni, M. Safaryan, Laurent Condat, Peter Richtárik
07 Oct 2020

Federated Learning with Communication Delay in Edge Networks
F. Lin, Christopher G. Brinton, Nicolò Michelusi
21 Aug 2020

Communication-Efficient Federated Learning via Optimal Client Sampling
Mónica Ribero, H. Vikalo
30 Jul 2020

A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Samuel Horváth, Peter Richtárik
19 Jun 2020

A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization
Zhize Li, Peter Richtárik
12 Jun 2020

UVeQFed: Universal Vector Quantization for Federated Learning
Nir Shlezinger, Mingzhe Chen, Yonina C. Eldar, H. Vincent Poor, Shuguang Cui
05 Jun 2020

Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang, Shaoshuai Shi, Wei Wang, Yue Liu, Xiaowen Chu
10 Mar 2020

On Biased Compression for Distributed Learning
Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, M. Safaryan
27 Feb 2020

Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik
26 Feb 2020

Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan, Egor Shulgin, Peter Richtárik
20 Feb 2020

Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
Max Ryabinin, Anton I. Gusev
10 Feb 2020

Differentially Quantized Gradient Methods
Chung-Yi Lin, V. Kostina, B. Hassibi
06 Feb 2020

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
10 Dec 2019

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta, El Houcine Bergou, A. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis
19 Nov 2019

Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning
Anis Elgabli, Jihong Park, Amrit Singh Bedi, Chaouki Ben Issaid, M. Bennis, Vaneet Aggarwal
23 Oct 2019