3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning

21 February 2018
Hyeontaek Lim, D. Andersen, M. Kaminsky

Papers citing "3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning"

11 / 11 papers shown

Communication-Efficient Large-Scale Distributed Deep Learning: A Comprehensive Survey
Feng Liang, Zhen Zhang, Haifeng Lu, Victor C. M. Leung, Yanyi Guo, Xiping Hu
Tags: GNN
09 Apr 2024

Correlated Quantization for Faster Nonconvex Distributed Optimization
Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik
Tags: MQ
10 Jan 2024

Enhancing Efficiency in Multidevice Federated Learning through Data Selection
Fan Mo, Mohammad Malekzadeh, S. Chatterjee, F. Kawsar, Akhil Mathur
Tags: FedML
08 Nov 2022

L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient and Accurate Deep Learning
Mohammadreza Alimohammadi, I. Markov, Elias Frantar, Dan Alistarh
31 Oct 2022

Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory
Nicole Mitchell, Johannes Ballé, Zachary B. Charles, Jakub Konečný
Tags: FedML
07 Jan 2022

FastSGD: A Fast Compressed SGD Framework for Distributed Machine Learning
Keyu Yang, Lu Chen, Zhihao Zeng, Yunjun Gao
08 Dec 2021

Is Network the Bottleneck of Distributed Training?
Zhen Zhang, Chaokun Chang, Yanghua Peng, Yida Wang, R. Arora, Xin Jin
17 Jun 2020

Communication Optimization Strategies for Distributed Deep Neural Network Training: A Survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
06 Mar 2020

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta, El Houcine Bergou, A. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis
19 Nov 2019

Taming Momentum in a Distributed Asynchronous Environment
Ido Hakimi, Saar Barkai, Moshe Gabel, Assaf Schuster
26 Jul 2019

Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik
27 May 2019