TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
arXiv:1705.07878 · 22 May 2017
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
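For readers landing on this listing, the cited paper's core trick is to quantize each worker's gradient to three levels before it is sent over the network. Below is a minimal PyTorch sketch of that ternarization step, assuming the paper's stochastic, unbiased scheme with a per-tensor scale s = max|g|; the `ternarize` name is illustrative, and details from the paper such as gradient clipping and layer-wise scaling are omitted:

```python
import torch

def ternarize(grad: torch.Tensor) -> torch.Tensor:
    """Stochastically quantize a gradient to the three levels {-s, 0, +s}.

    Sketch of the TernGrad idea: with s = max|g|, coordinate g_i is sent as
    s * sign(g_i) with probability |g_i| / s and as 0 otherwise, making the
    quantized gradient unbiased in expectation.
    """
    s = grad.abs().max()
    if s == 0:                         # all-zero gradient: nothing to quantize
        return torch.zeros_like(grad)
    keep_prob = grad.abs() / s         # per-coordinate keep probability in [0, 1]
    mask = torch.bernoulli(keep_prob)  # 1 with probability |g_i|/s, else 0
    return s * grad.sign() * mask      # values in {-s, 0, +s}
```

Since each coordinate then takes one of only three values, a worker needs to transmit just the scalar s plus roughly two bits per coordinate, which is where the communication saving comes from.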
Papers citing "TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning" (showing 50 of 467)
Adaptive Gradient Quantization for Data-Parallel SGD. Fartash Faghri, Iman Tabrizian, I. Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya. [MQ] 23 Oct 2020
Linearly Converging Error Compensated SGD. Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik. 23 Oct 2020
Decentralized Deep Learning using Momentum-Accelerated Consensus. Aditya Balu, Zhanhong Jiang, Sin Yong Tan, Chinmay Hegde, Young M. Lee, Soumik Sarkar. [FedML] 21 Oct 2020
FPRaker: A Processing Element For Accelerating Neural Network Training. Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos. 15 Oct 2020
Federated Learning in Adversarial Settings. Raouf Kerkouche, G. Ács, C. Castelluccia. [FedML] 15 Oct 2020
Optimal Gradient Compression for Distributed and Federated Learning. Alyazeed Albasyoni, M. Safaryan, Laurent Condat, Peter Richtárik. [FedML] 07 Oct 2020
How to send a real number using a single bit (and some shared randomness). Ran Ben-Basat, Michael Mitzenmacher, S. Vargaftik. 05 Oct 2020
Sparse Communication for Training Deep Networks. Negar Foroutan, Martin Jaggi. [FedML] 19 Sep 2020
Adversarial Robustness through Bias Variance Decomposition: A New Perspective for Federated Learning. Yao Zhou, Jun Wu, Haixun Wang, Jingrui He. [AAML, FedML] 18 Sep 2020
PSO-PS: Parameter Synchronization with Particle Swarm Optimization for Distributed Training of Deep Neural Networks. Qing Ye, Y. Han, Yanan Sun, Jiancheng Lv. 06 Sep 2020
On Communication Compression for Distributed Optimization on Heterogeneous Data. Sebastian U. Stich. 04 Sep 2020
ESMFL: Efficient and Secure Models for Federated Learning. Sheng Lin, Chenghong Wang, Hongjia Li, Jieren Deng, Yanzhi Wang, Caiwen Ding. [FedML] 03 Sep 2020
TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference. Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Omar Mohamed Awad, Gennady Pekhimenko, Jorge Albericio, Andreas Moshovos. [MoE] 01 Sep 2020
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm. Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ji Liu, Yuxiong He, Ce Zhang. 26 Aug 2020
Periodic Stochastic Gradient Descent with Momentum for Decentralized Training. Hongchang Gao, Heng Huang. 24 Aug 2020
Adaptive Serverless Learning. Hongchang Gao, Heng Huang. 24 Aug 2020
Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs. Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, A. Suresh. [FedML] 17 Aug 2020
Step-Ahead Error Feedback for Distributed Training with Compressed Gradient. An Xu, Zhouyuan Huo, Heng Huang. 13 Aug 2020
FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching. Farzin Haddadpour, Belhal Karimi, Ping Li, Xiaoyun Li. [FedML] 11 Aug 2020
A Survey on Large-scale Machine Learning. Meng Wang, Weijie Fu, Xiangnan He, Shijie Hao, Xindong Wu. 10 Aug 2020
Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications. Jihong Park, S. Samarakoon, Anis Elgabli, Joongheon Kim, M. Bennis, Seong-Lyun Kim, Mérouane Debbah. 06 Aug 2020
PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning. Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi. [FedML] 04 Aug 2020
Efficient Sparse Secure Aggregation for Federated Learning. C. Béguier, M. Andreux, Eric W. Tramel. [FedML] 29 Jul 2020
CSER: Communication-efficient SGD with Error Reset. Cong Xie, Shuai Zheng, Oluwasanmi Koyejo, Indranil Gupta, Mu Li, Yanghua Peng. 26 Jul 2020
DBS: Dynamic Batch Size For Distributed Deep Neural Network Training. Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv. 23 Jul 2020
Breaking the Communication-Privacy-Accuracy Trilemma. Wei-Ning Chen, Peter Kairouz, Ayfer Özgür. 22 Jul 2020
SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training. Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, Weisheng Zhao. 21 Jul 2020
Adaptive Periodic Averaging: A Practical Approach to Reducing Communication in Distributed Learning. Peng Jiang, G. Agrawal. 13 Jul 2020
Federated Learning with Compression: Unified Analysis and Sharp Guarantees. Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi. [FedML] 02 Jul 2020
Shuffle-Exchange Brings Faster: Reduce the Idle Time During Communication for Decentralized Neural Network Training. Xiang Yang. [FedML] 01 Jul 2020
Linear Convergent Decentralized Optimization with Compression. Xiaorui Liu, Yao Li, Rongrong Wang, Jiliang Tang, Ming Yan. 01 Jul 2020
DEED: A General Quantization Scheme for Communication Efficiency in Bits. Tian-Chun Ye, Peijun Xiao, Ruoyu Sun. [FedML, MQ] 19 Jun 2020
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning. Samuel Horváth, Peter Richtárik. 19 Jun 2020
Is Network the Bottleneck of Distributed Training? Zhen Zhang, Chaokun Chang, Yanghua Peng, Yida Wang, R. Arora, Xin Jin. 17 Jun 2020
Federated Accelerated Stochastic Gradient Descent. Honglin Yuan, Tengyu Ma. [FedML] 16 Jun 2020
Distributed Newton Can Communicate Less and Resist Byzantine Workers. Avishek Ghosh, R. Maity, A. Mazumdar. [FedML] 15 Jun 2020
O(1) Communication for Distributed SGD through Two-Level Gradient Averaging. Subhadeep Bhattacharya, Weikuan Yu, Fahim Chowdhury. [FedML] 12 Jun 2020
A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization. Zhize Li, Peter Richtárik. [FedML] 12 Jun 2020
Daydream: Accurately Estimating the Efficacy of Optimizations for DNN Training. Hongyu Zhu, Amar Phanishayee, Gennady Pekhimenko. 05 Jun 2020
UVeQFed: Universal Vector Quantization for Federated Learning. Nir Shlezinger, Mingzhe Chen, Yonina C. Eldar, H. Vincent Poor, Shuguang Cui. [FedML, MQ] 05 Jun 2020
DaSGD: Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging. Q. Zhou, Yawen Zhang, Pengcheng Li, Xiaoyong Liu, Jun Yang, Runsheng Wang, Ru Huang. [FedML] 31 May 2020
rTop-k: A Statistical Estimation Approach to Distributed SGD. L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür. 21 May 2020
Scaling-up Distributed Processing of Data Streams for Machine Learning. M. Nokleby, Haroon Raja, W. Bajwa. 18 May 2020
Communication-Efficient Gradient Coding for Straggler Mitigation in Distributed Learning. S. Kadhe, O. O. Koyluoglu, Kannan Ramchandran. 14 May 2020
OD-SGD: One-step Delay Stochastic Gradient Descent for Distributed Training. Yemao Xu, Dezun Dong, Weixia Xu, Xiangke Liao. 14 May 2020
SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization. Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi. 13 May 2020
Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging. Shigang Li, Tal Ben-Nun, Giorgi Nadiradze, Salvatore Di Girolamo, Nikoli Dryden, Dan Alistarh, Torsten Hoefler. 30 Apr 2020
Distributed Stochastic Nonconvex Optimization and Learning based on Successive Convex Approximation. P. Lorenzo, Simone Scardapane. 30 Apr 2020
Quantized Adam with Error Feedback. Congliang Chen, Li Shen, Haozhi Huang, Wei Liu. [ODL, MQ] 29 Apr 2020