TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

22 May 2017
W. Wen
Cong Xu
Feng Yan
Chunpeng Wu
Yandan Wang
Yiran Chen
Hai Helen Li
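For context, the cited paper reduces communication by sending each gradient tensor as three levels instead of full-precision floats. Below is a minimal sketch of that idea: unbiased stochastic rounding of a gradient to {-s, 0, +s}, where s is the per-tensor maximum magnitude. The helper name `ternarize` and the omission of the paper's gradient clipping are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def ternarize(grad, rng=None):
    """Stochastically quantize a gradient tensor to {-s, 0, +s}.

    Sketch of layer-wise ternary gradient quantization: each component
    becomes s * sign(g) with probability |g| / s (and 0 otherwise),
    where s = max |g| over the tensor, so the result is unbiased in
    expectation: E[ternarize(g)] = g.
    """
    rng = np.random.default_rng() if rng is None else rng
    s = np.max(np.abs(grad))
    if s == 0:
        return np.zeros_like(grad)
    # Keep a component's sign with probability proportional to its magnitude.
    mask = rng.random(grad.shape) < (np.abs(grad) / s)
    return s * np.sign(grad) * mask

# A worker would transmit only the scalar s plus a 2-bit code per component.
g = np.array([0.02, -0.5, 0.4, -0.01])
print(ternarize(g))  # e.g. [ 0.  -0.5  0.5  0. ]
```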

Papers citing "TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning"

50 / 467 papers shown
Caramel: Accelerating Decentralized Distributed Deep Learning with Computation Scheduling
Sayed Hadi Hashemi
Sangeetha Abdu Jyothi
Brighten Godfrey
R. Campbell
25
2
0
29 Apr 2020
Memory-efficient training with streaming dimensionality reduction
Siyuan Huang
Brian D. Hoskins
M. Daniels
M. D. Stiles
G. Adam
6
3
0
25 Apr 2020
Communication Efficient Federated Learning with Energy Awareness over Wireless Networks
Richeng Jin
Xiaofan He
H. Dai
41
25
0
15 Apr 2020
Detached Error Feedback for Distributed SGD with Random Sparsification
An Xu
Heng-Chiao Huang
41
9
0
11 Apr 2020
Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training
Simon Wiedemann
Temesgen Mehari
Kevin Kepp
Wojciech Samek
32
18
0
09 Apr 2020
Evaluating the Communication Efficiency in Federated Learning Algorithms
Muhammad Asad
Ahmed Moustafa
Takayuki Ito
M. Aslam
FedML
22
51
0
06 Apr 2020
Reducing Data Motion to Accelerate the Training of Deep Neural Networks
Sicong Zhuang
Cristiano Malossi
Marc Casas
27
0
0
05 Apr 2020
A Hybrid-Order Distributed SGD Method for Non-Convex Optimization to Balance Communication Overhead, Computational Complexity, and Convergence Rate
Naeimeh Omidvar
M. Maddah-ali
Hamed Mahdavi
ODL
25
3
0
27 Mar 2020
Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning
Shaoxiong Ji
Wenqi Jiang
A. Walid
Xue Li
FedML
28
66
0
21 Mar 2020
A flexible framework for communication-efficient machine learning: from HPC to IoT
Sarit Khirirat
Sindri Magnússon
Arda Aytekin
M. Johansson
19
7
0
13 Mar 2020
Communication-efficient Variance-reduced Stochastic Gradient Descent
H. S. Ghadikolaei
Sindri Magnússon
22
3
0
10 Mar 2020
Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang
Shaoshuai Shi
Wei Wang
Bo Li
Xuming Hu
31
48
0
10 Mar 2020
Ternary Compression for Communication-Efficient Federated Learning
Jinjin Xu
W. Du
Ran Cheng
Wangli He
Yaochu Jin
MQ
FedML
47
174
0
07 Mar 2020
ShadowSync: Performing Synchronization in the Background for Highly Scalable Distributed Training
Qinqing Zheng
Bor-Yiing Su
Jiyan Yang
A. Azzolini
Qiang Wu
Ou Jin
S. Karandikar
Hagay Lupesko
Liang Xiong
Eric Zhou
3DH
FedML
GNN
9
8
0
07 Mar 2020
Trends and Advancements in Deep Neural Network Communication
Felix Sattler
Thomas Wiegand
Wojciech Samek
GNN
33
9
0
06 Mar 2020
Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang
Dezun Dong
Yemao Xu
Liquan Xiao
30
12
0
06 Mar 2020
On Biased Compression for Distributed Learning
Aleksandr Beznosikov
Samuel Horváth
Peter Richtárik
M. Safaryan
10
186
0
27 Feb 2020
Disentangling Adaptive Gradient Methods from Learning Rates
Naman Agarwal
Rohan Anil
Elad Hazan
Tomer Koren
Cyril Zhang
27
34
0
26 Feb 2020
Moniqua: Modulo Quantized Communication in Decentralized SGD
Yucheng Lu
Christopher De Sa
MQ
32
50
0
26 Feb 2020
LASG: Lazily Aggregated Stochastic Gradients for Communication-Efficient Distributed Learning
Tianyi Chen
Yuejiao Sun
W. Yin
FedML
22
14
0
26 Feb 2020
Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training
An Xu
Zhouyuan Huo
Heng-Chiao Huang
MQ
16
6
0
25 Feb 2020
Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Richeng Jin
Yufan Huang
Xiaofan He
H. Dai
Tianfu Wu
FedML
27
62
0
25 Feb 2020
Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
Zhenheng Tang
Shaoshuai Shi
Xiaowen Chu
FedML
21
57
0
22 Feb 2020
New Bounds For Distributed Mean Estimation and Variance Reduction
Peter Davies
Vijaykrishna Gurunathan
Niusha Moshrefi
Saleh Ashkboos
Dan Alistarh
FedML
15
2
0
21 Feb 2020
Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan
Egor Shulgin
Peter Richtárik
32
61
0
20 Feb 2020
Differentially Quantized Gradient Methods
Chung-Yi Lin
V. Kostina
B. Hassibi
MQ
30
7
0
06 Feb 2020
Brainstorming Generative Adversarial Networks (BGANs): Towards Multi-Agent Generative Models with Distributed Private Datasets
A. Ferdowsi
Walid Saad
AI4CE
7
17
0
02 Feb 2020
D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless Network Edge
Xiaoran Cai
Xiaopeng Mo
Junyang Chen
Jie Xu
11
26
0
28 Jan 2020
Communication Efficient Federated Learning over Multiple Access Channels
Wei-Ting Chang
Ravi Tandon
FedML
23
44
0
23 Jan 2020
Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning
Yining Qi
Zhihao Qu
Song Guo
Xin Gao
Ruixuan Li
Baoliu Ye
FedML
18
8
0
22 Jan 2020
A Federated Deep Learning Framework for Privacy Preservation and Communication Efficiency
Tien-Dung Cao
Tram Truong-Huu
H. Tran
K. Tran
FedML
17
27
0
22 Jan 2020
Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent
Giorgi Nadiradze
Ilia Markov
Bapi Chatterjee
Vyacheslav Kungurtsev
Dan Alistarh
FedML
22
14
0
16 Jan 2020
Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond
Tsung-Hui Chang
Mingyi Hong
Hoi-To Wai
Xinwei Zhang
Songtao Lu
GNN
31
13
0
14 Jan 2020
Distributed Fixed Point Methods with Compressed Iterates
Sélim Chraibi
Ahmed Khaled
D. Kovalev
Peter Richtárik
Adil Salim
Martin Takáč
FedML
19
16
0
20 Dec 2019
MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
Shaoshuai Shi
Xiaowen Chu
Bo Li
FedML
28
25
0
18 Dec 2019
Parallel Restarted SPIDER -- Communication Efficient Distributed Nonconvex Optimization with Optimal Computation Complexity
Pranay Sharma
Swatantra Kafle
Prashant Khanduri
Saikiran Bulusu
K. Rajawat
P. Varshney
FedML
31
17
0
12 Dec 2019
Communication-Efficient Network-Distributed Optimization with Differential-Coded Compressors
Xin Zhang
Jia-Wei Liu
Zhengyuan Zhu
Elizabeth S. Bentley
11
7
0
06 Dec 2019
Communication-Efficient and Byzantine-Robust Distributed Learning with Error Feedback
Avishek Ghosh
R. Maity
S. Kadhe
A. Mazumdar
Kannan Ramchandran
FedML
21
27
0
21 Nov 2019
Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates
Cong Xie
Oluwasanmi Koyejo
Indranil Gupta
Yanghua Peng
26
41
0
20 Nov 2019
Auto-Precision Scaling for Distributed Deep Learning
Ruobing Han
J. Demmel
Yang You
21
5
0
20 Nov 2019
Understanding Top-k Sparsification in Distributed Deep Learning
Shaoshuai Shi
Xiaowen Chu
Ka Chun Cheung
Simon See
30
95
0
20 Nov 2019
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
Shaoshuai Shi
Zhenheng Tang
Qiang-qiang Wang
Kaiyong Zhao
Xiaowen Chu
19
22
0
20 Nov 2019
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta
El Houcine Bergou
A. Abdelmoniem
Chen-Yu Ho
Atal Narayan Sahu
Marco Canini
Panos Kalnis
33
77
0
19 Nov 2019
vqSGD: Vector Quantized Stochastic Gradient Descent
V. Gandikota
Daniel Kane
R. Maity
A. Mazumdar
MQ
30
4
0
18 Nov 2019
Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning
Xinyan Dai
Xiao Yan
Kaiwen Zhou
Han Yang
K. K. Ng
James Cheng
Yu Fan
FedML
27
47
0
12 Nov 2019
MindTheStep-AsyncPSGD: Adaptive Asynchronous Parallel Stochastic Gradient Descent
Karl Bäckström
Marina Papatriantafilou
P. Tsigas
28
11
0
08 Nov 2019
On-Device Machine Learning: An Algorithms and Learning Theory Perspective
Sauptik Dhar
Junyao Guo
Jiayi Liu
S. Tripathi
Unmesh Kurup
Mohak Shah
33
141
0
02 Nov 2019
Progressive Compressed Records: Taking a Byte out of Deep Learning Data
Michael Kuchnik
George Amvrosiadis
Virginia Smith
19
9
0
01 Nov 2019
SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization
Navjot Singh
Deepesh Data
Jemin George
Suhas Diggavi
29
23
0
31 Oct 2019
Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization
Farzin Haddadpour
Mohammad Mahdi Kamani
M. Mahdavi
V. Cadambe
FedML
33
199
0
30 Oct 2019