TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

22 May 2017
W. Wen
Cong Xu
Feng Yan
Chunpeng Wu
Yandan Wang
Yiran Chen
Hai Helen Li

Papers citing "TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning"

50 / 467 papers shown
E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings
Yue Wang
Ziyu Jiang
Xiaohan Chen
Pengfei Xu
Yang Katie Zhao
Yingyan Lin
Zhangyang Wang
MQ
29
83
0
29 Oct 2019
Gradient Sparification for Asynchronous Distributed Training
Zijie Yan
FedML
11
1
0
24 Oct 2019
Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning
Anis Elgabli
Jihong Park
Amrit Singh Bedi
Chaouki Ben Issaid
M. Bennis
Vaneet Aggarwal
24
67
0
23 Oct 2019
Sparsification as a Remedy for Staleness in Distributed Asynchronous SGD
Rosa Candela
Giulio Franzese
Maurizio Filippone
Pietro Michiardi
18
1
0
21 Oct 2019
A Double Residual Compression Algorithm for Efficient Distributed Learning
Xiaorui Liu
Yao Li
Jiliang Tang
Ming Yan
24
49
0
16 Oct 2019
Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks
Jy-yong Sohn
Dong-Jun Han
Beongjun Choi
Jaekyun Moon
FedML
21
36
0
14 Oct 2019
JSDoop and TensorFlow.js: Volunteer Distributed Web Browser-Based Neural Network Training
José Á. Morell
Andrés Camero
Enrique Alba
29
9
0
12 Oct 2019
Straggler-Agnostic and Communication-Efficient Distributed Primal-Dual Algorithm for High-Dimensional Data Mining
Zhouyuan Huo
Heng-Chiao Huang
FedML
19
5
0
09 Oct 2019
Distributed Learning of Deep Neural Networks using Independent Subnet Training
John Shelton Hyatt
Cameron R. Wolfe
Michael Lee
Yuxin Tang
Anastasios Kyrillidis
Christopher M. Jermaine
OOD
29
35
0
04 Oct 2019
SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum
Jianyu Wang
Vinayak Tantia
Nicolas Ballas
Michael G. Rabbat
17
200
0
01 Oct 2019
Gap Aware Mitigation of Gradient Staleness
Saar Barkai
Ido Hakimi
Assaf Schuster
17
23
0
24 Sep 2019
Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
Jun Sun
Tianyi Chen
G. Giannakis
Zaiyue Yang
30
93
0
17 Sep 2019
The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Sebastian U. Stich
Sai Praneeth Karimireddy
FedML
25
20
0
11 Sep 2019
Gradient Descent with Compressed Iterates
Ahmed Khaled
Peter Richtárik
21
22
0
10 Sep 2019
Beyond Human-Level Accuracy: Computational Challenges in Deep Learning
Joel Hestness
Newsha Ardalani
G. Diamos
21
66
0
03 Sep 2019
An End-to-End Encrypted Neural Network for Gradient Updates Transmission in Federated Learning
Hongyu Li
Tianqi Han
FedML
27
32
0
22 Aug 2019
RATQ: A Universal Fixed-Length Quantizer for Stochastic Optimization
Prathamesh Mayekar
Himanshu Tyagi
MQ
35
48
0
22 Aug 2019
Accelerated CNN Training Through Gradient Approximation
Ziheng Wang
Sree Harsha Nelaturu
176
5
0
15 Aug 2019
Accelerating CNN Training by Pruning Activation Gradients
Xucheng Ye
Pengcheng Dai
Junyu Luo
Xin Guo
Weisheng Zhao
Jianlei Yang
Yiran Chen
11
2
0
01 Aug 2019
Taming Momentum in a Distributed Asynchronous Environment
Ido Hakimi
Saar Barkai
Moshe Gabel
Assaf Schuster
19
23
0
26 Jul 2019
Federated Learning over Wireless Fading Channels
M. Amiri
Deniz Gunduz
33
508
0
23 Jul 2019
Decentralized Deep Learning with Arbitrary Communication Compression
Anastasia Koloskova
Tao R. Lin
Sebastian U. Stich
Martin Jaggi
FedML
28
233
0
22 Jul 2019
signADAM: Learning Confidences for Deep Neural Networks
Dong Wang
Yicheng Liu
Wenwo Tang
Fanhua Shang
Hongying Liu
Qigong Sun
Licheng Jiao
ODL
FedML
16
1
0
21 Jul 2019
DeepSqueeze: Decentralization Meets Error-Compensated Compression
Hanlin Tang
Xiangru Lian
Shuang Qiu
Lei Yuan
Ce Zhang
Tong Zhang
Ji Liu
14
49
0
17 Jul 2019
QUOTIENT: Two-Party Secure Neural Network Training and Prediction
Nitin Agrawal
Ali Shahin Shamsabadi
Matt J. Kusner
Adria Gascon
30
212
0
08 Jul 2019
Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent
Shuheng Shen
Linli Xu
Jingchang Liu
Xianfeng Liang
Yifei Cheng
ODL
FedML
29
24
0
28 Jun 2019
Database Meets Deep Learning: Challenges and Opportunities
Wei Wang
Meihui Zhang
Gang Chen
H. V. Jagadish
Beng Chin Ooi
K. Tan
21
147
0
21 Jun 2019
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations
Debraj Basu
Deepesh Data
C. Karakuş
Suhas Diggavi
MQ
24
402
0
06 Jun 2019
Distributed Training with Heterogeneous Data: Bridging Median- and Mean-Based Algorithms
Xiangyi Chen
Tiancong Chen
Haoran Sun
Zhiwei Steven Wu
Mingyi Hong
FedML
24
73
0
04 Jun 2019
PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
Thijs Vogels
Sai Praneeth Karimireddy
Martin Jaggi
19
317
0
31 May 2019
Global Momentum Compression for Sparse Communication in Distributed Learning
Chang-Wei Shi
Shen-Yi Zhao
Yin-Peng Xie
Hao Gao
Wu-Jun Li
35
1
0
30 May 2019
Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
Shicong Cen
Huishuai Zhang
Yuejie Chi
Wei-neng Chen
Tie-Yan Liu
FedML
16
27
0
29 May 2019
Accelerated Sparsified SGD with Error Feedback
Tomoya Murata
Taiji Suzuki
22
2
0
29 May 2019
A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
Eduard A. Gorbunov
Filip Hanzely
Peter Richtárik
25
143
0
27 May 2019
Natural Compression for Distributed Deep Learning
Samuel Horváth
Chen-Yu Ho
L. Horvath
Atal Narayan Sahu
Marco Canini
Peter Richtárik
21
151
0
27 May 2019
Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback
Shuai Zheng
Ziyue Huang
James T. Kwok
16
114
0
27 May 2019
Decentralized Learning of Generative Adversarial Networks from Non-iid Data
Ryo Yonetani
Tomohiro Takahashi
Atsushi Hashimoto
Yoshitaka Ushiku
45
24
0
23 May 2019
MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang
Anit Kumar Sahu
Zhouyi Yang
Gauri Joshi
S. Kar
29
159
0
23 May 2019
DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
Hanlin Tang
Xiangru Lian
Chen Yu
Tong Zhang
Ji Liu
11
217
0
15 May 2019
Priority-based Parameter Propagation for Distributed DNN Training
Anand Jayarajan
Jinliang Wei
Garth A. Gibson
Alexandra Fedorova
Gennady Pekhimenko
AI4CE
22
178
0
10 May 2019
On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization
Hao Yu
Rong Jin
Sen Yang
FedML
49
380
0
09 May 2019
Communication trade-offs for synchronized distributed SGD with large step size
Kumar Kshitij Patel
Aymeric Dieuleveut
FedML
30
27
0
25 Apr 2019
Distributed Deep Learning Strategies For Automatic Speech Recognition
Wei Zhang
Xiaodong Cui
Ulrich Finkler
Brian Kingsbury
G. Saon
David S. Kung
M. Picheny
21
29
0
10 Apr 2019
Nested Dithered Quantization for Communication Reduction in Distributed Training
Afshin Abdi
Faramarz Fekri
MQ
6
16
0
02 Apr 2019
Scalable Deep Learning on Distributed Infrastructures: Challenges, Techniques and Tools
R. Mayer
Hans-Arno Jacobsen
GNN
29
186
0
27 Mar 2019
Communication-efficient distributed SGD with Sketching
Nikita Ivkin
D. Rothchild
Enayat Ullah
Vladimir Braverman
Ion Stoica
R. Arora
FedML
22
198
0
12 Mar 2019
Robust and Communication-Efficient Federated Learning from Non-IID Data
Felix Sattler
Simon Wiedemann
K. Müller
Wojciech Samek
FedML
24
1,337
0
07 Mar 2019
Speeding up Deep Learning with Transient Servers
Shijian Li
R. Walls
Lijie Xu
Tian Guo
30
12
0
28 Feb 2019
On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication
Sindri Magnússon
H. S. Ghadikolaei
Na Li
27
81
0
26 Feb 2019
Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free
Mingrui Zhang
Lin Chen
Aryan Mokhtari
Hamed Hassani
Amin Karbasi
16
8
0
17 Feb 2019