TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

22 May 2017
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
ArXiv · PDF · HTML
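
For orientation while scanning the citation list: TernGrad replaces each gradient component with one of three values, s · sign(g_i) · b_i, where s = max_i |g_i| and b_i is a Bernoulli variable with P(b_i = 1) = |g_i| / s, so the quantized gradient equals the original in expectation. Below is a minimal NumPy sketch of that quantization rule; it is an illustration of the standard TernGrad formulation, not code from the paper, and the name ternarize is ours.

    import numpy as np

    def ternarize(grad, rng=None):
        """Stochastically quantize a gradient tensor to {-s, 0, +s}.

        TernGrad rule: s = max|g|; component i keeps its sign with
        probability |g_i| / s, so E[ternarize(g)] = g (unbiased).
        """
        rng = rng or np.random.default_rng()
        s = float(np.max(np.abs(grad)))
        if s == 0.0:
            return np.zeros_like(grad)  # all-zero gradient: nothing to quantize
        keep = rng.random(grad.shape) < np.abs(grad) / s  # Bernoulli(|g|/s) mask
        return s * np.sign(grad) * keep  # each entry is -s, 0, or +s

In a distributed run, only the scalar s and a two-bit code per component need to cross the network, which is the communication saving that the citing papers listed below extend.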

Papers citing "TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning"

50 of 467 citing papers shown on this page.
BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning
He Zhu, Qing Ling · FedML, AAML · 14 Apr 2021

1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed
Conglong Li, A. A. Awan, Hanlin Tang, Samyam Rajbhandari, Yuxiong He · 13 Apr 2021

Distributed Learning Systems with First-order Methods
Ji Liu, Ce Zhang · 12 Apr 2021

Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, M. Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor · 05 Apr 2021

MergeComp: A Compression Scheduler for Scalable Communication-Efficient Distributed Training
Zhuang Wang, X. Wu, T. Ng · GNN · 28 Mar 2021

Compressed Gradient Tracking Methods for Decentralized Optimization with Linear Convergence
Yiwei Liao, Zhuoru Li, Kun-Yen Huang, Shi Pu · 25 Mar 2021

Escaping Saddle Points in Distributed Newton's Method with Communication Efficiency and Byzantine Resilience
Avishek Ghosh, R. Maity, A. Mazumdar, Kannan Ramchandran · FedML · 17 Mar 2021

Ternary Hashing
Chang Liu, Lixin Fan, Kam Woh Ng, Yilun Jin, Ce Ju, Tianyu Zhang, Chee Seng Chan, Qiang Yang · 16 Mar 2021

Learned Gradient Compression for Distributed Deep Learning
L. Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis · 16 Mar 2021

Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget
R. Saha, Mert Pilanci, Andrea J. Goldsmith · 13 Mar 2021

Pufferfish: Communication-efficient Models At No Extra Cost
Hongyi Wang, Saurabh Agarwal, Dimitris Papailiopoulos · 05 Mar 2021

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko · 04 Mar 2021

On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos · 28 Feb 2021

Constrained Differentially Private Federated Learning for Low-bandwidth Devices
Raouf Kerkouche, G. Ács, C. Castelluccia, P. Genevès · 27 Feb 2021

Noisy Truncated SGD: Optimization and Generalization
Yingxue Zhou, Xinyan Li, A. Banerjee · 26 Feb 2021

Preserved central model for faster bidirectional compression in distributed settings
Constantin Philippenko, Aymeric Dieuleveut · 24 Feb 2021

Peering Beyond the Gradient Veil with Distributed Auto Differentiation
Bradley T. Baker, Aashis Khanal, Vince D. Calhoun, Barak A. Pearlmutter, Sergey Plis · 18 Feb 2021

Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach
Junshan Zhang, Na Li, M. Dedeoglu · FedML · 16 Feb 2021

Distributed Online Learning for Joint Regret with Communication Constraints
Dirk van der Hoeven, Hédi Hadiji, T. Erven · 15 Feb 2021

Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization
M. Safaryan, Filip Hanzely, Peter Richtárik · 14 Feb 2021

Task-oriented Communication Design in Cyber-Physical Systems: A Survey on Theory and Applications
Arsham Mostaani, T. Vu, Shree Krishna Sharma, Van-Dinh Nguyen, Qi Liao, Symeon Chatzinotas · 14 Feb 2021

Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik · 14 Feb 2021

Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani · FedML · 14 Feb 2021

Consensus Based Multi-Layer Perceptrons for Edge Computing
Haimonti Dutta, N. Nataraj, S. Mahindre · 09 Feb 2021

Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning
Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar · FedML, MQ · 08 Feb 2021

Enabling Binary Neural Network Training on the Edge
Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, C. Coelho, S. Chatterjee, P. Cheung, George A. Constantinides · MQ · 08 Feb 2021

DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning
Kelly Kostopoulou, Hang Xu, Aritra Dutta, Xin Li, A. Ntoulas, Panos Kalnis · 05 Feb 2021

1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He · AI4CE · 04 Feb 2021

Provably Secure Federated Learning against Malicious Clients
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong · FedML · 03 Feb 2021

FEDZIP: A Compression Framework for Communication-Efficient Federated Learning
Amirhossein Malekijoo, Mohammad Javad Fadaeieslam, Hanieh Malekijou, Morteza Homayounfar, F. Alizadeh-Shabdiz, Reza Rawassizadeh · FedML · 02 Feb 2021

Differential Privacy Meets Federated Learning under Communication Constraints
Nima Mohammadi, Jianan Bai, Q. Fan, Yifei Song, Yuhao Yi, Lingjia Liu · FedML · 28 Jan 2021

An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems
A. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini · 26 Jan 2021

Time-Correlated Sparsification for Communication-Efficient Federated Learning
Emre Ozfatura, Kerem Ozfatura, Deniz Gunduz · FedML · 21 Jan 2021

Sum-Rate-Distortion Function for Indirect Multiterminal Source Coding in Federated Learning
Naifu Zhang, M. Tao, Jia Wang · FedML · 21 Jan 2021

DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling
Shangming Cai, Dongsheng Wang, Haixia Wang, Yongqiang Lyu, Guangquan Xu, Xi Zheng, A. Vasilakos · 20 Jan 2021

CADA: Communication-Adaptive Distributed Adam
Tianyi Chen, Ziye Guo, Yuejiao Sun, W. Yin · ODL · 31 Dec 2020

FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training
Y. Fu, Haoran You, Yang Katie Zhao, Yue Wang, Chaojian Li, K. Gopalakrishnan, Zhangyang Wang, Yingyan Lin · MQ · 24 Dec 2020

Adaptive Precision Training for Resource Constrained Devices
Tian Huang, Yaoyu Zhang, Qiufeng Wang · 23 Dec 2020

CosSGD: Communication-Efficient Federated Learning with a Simple Cosine-Based Quantization
Yang He, Hui-Po Wang, M. Zenk, Mario Fritz · FedML, MQ · 15 Dec 2020

Quantizing data for distributed learning
Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi · FedML · 14 Dec 2020

Distributed Training of Graph Convolutional Networks using Subgraph Approximation
Alexandra Angerd, Keshav Balasubramanian, M. Annavaram · GNN · 09 Dec 2020

Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning
F. Farokhi · 30 Nov 2020

Distributed Additive Encryption and Quantization for Privacy Preserving Federated Deep Learning
Hangyu Zhu, Rui Wang, Yaochu Jin, K. Liang, Jianting Ning · FedML · 25 Nov 2020

Wyner-Ziv Estimators for Distributed Mean Estimation with Side Information and Optimization
Prathamesh Mayekar, Shubham K. Jha, A. Suresh, Himanshu Tyagi · FedML · 24 Nov 2020

Distributed Sparse SGD with Majority Voting
Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz · FedML · 12 Nov 2020

Compression Boosts Differentially Private Federated Learning
Raouf Kerkouche, G. Ács, C. Castelluccia, P. Genevès · FedML · 10 Nov 2020

A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!
D. Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtárik, Sebastian U. Stich · 03 Nov 2020

Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, Dimitris Papailiopoulos · 29 Oct 2020

Optimal Client Sampling for Federated Learning
Wenlin Chen, Samuel Horváth, Peter Richtárik · FedML · 26 Oct 2020

A Distributed Training Algorithm of Generative Adversarial Networks with Quantized Gradients
Xiaojun Chen, Shu Yang, Liyan Shen, Xuanrong Pang · 26 Oct 2020