
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

22 May 2017
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
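For context on the cited paper: TernGrad compresses each worker's gradient to three levels, {-1, 0, +1}, scaled by the gradient's maximum magnitude, using stochastic rounding so that the compressed gradient remains an unbiased estimate of the original. Below is a minimal NumPy sketch of that quantizer; the function name and standalone framing are illustrative, and the paper's full method adds refinements (e.g., layer-wise scaling and gradient clipping) that this sketch omits.

```python
import numpy as np

def ternarize(grad: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stochastic ternary quantization: each coordinate becomes -s, 0, or +s.

    s is the gradient's max magnitude; coordinate i keeps its sign with
    probability |grad[i]| / s, so E[output] == grad (unbiased).
    """
    s = float(np.abs(grad).max())
    if s == 0.0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < np.abs(grad) / s  # Bernoulli(|g_i| / s)
    return (s * np.sign(grad) * keep).astype(grad.dtype)

# Example: only the ternary codes plus the shared scalar s need to be sent,
# instead of a full-precision value per coordinate.
rng = np.random.default_rng(0)
g = rng.normal(size=8).astype(np.float32)
print(ternarize(g, rng))
```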

Papers citing "TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning"

50 / 467 papers shown
Optimal Rate Adaption in Federated Learning with Compressed Communications
Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Jiangchuan Liu
13 Dec 2021 · FedML

FastSGD: A Fast Compressed SGD Framework for Distributed Machine Learning
Keyu Yang, Lu Chen, Zhihao Zeng, Yunjun Gao
08 Dec 2021

Efficient Batch Homomorphic Encryption for Vertically Federated XGBoost
Wuxing Xu, Hao Fan, Kaixin Li, Kairan Yang
08 Dec 2021 · FedML

Communication-Efficient Distributed Learning via Sparse and Adaptive Stochastic Gradient
Xiaoge Deng, Dongsheng Li, Tao Sun, Xicheng Lu
08 Dec 2021 · FedML

Collaborative Learning over Wireless Networks: An Introductory Overview
Emre Ozfatura, Deniz Gunduz, H. Vincent Poor
07 Dec 2021

Edge Artificial Intelligence for 6G: Vision, Enabling Technologies, and Applications
Khaled B. Letaief, Yuanming Shi, Jianmin Lu, Jianhua Lu
24 Nov 2021

Mesa: A Memory-saving Training Framework for Transformers
Zizheng Pan, Peng Chen, Haoyu He, Jing Liu, Jianfei Cai, Bohan Zhuang
22 Nov 2021

Doing More by Doing Less: How Structured Partial Backpropagation Improves Deep Learning Clusters
Adarsh Kumar, Kausik Subramanian, Shivaram Venkataraman, Aditya Akella
20 Nov 2021

CGX: Adaptive System Support for Communication-Efficient Deep Learning
I. Markov, Hamidreza Ramezanikebrya, Dan Alistarh
16 Nov 2021 · GNN

Wyner-Ziv Gradient Compression for Federated Learning
Kai Liang, Huiru Zhong, Haoning Chen, Youlong Wu
16 Nov 2021 · FedML

Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters
Xiangru Lian, Binhang Yuan, Xuefeng Zhu, Yulong Wang, Yongjun He, ..., Lei Yuan, Hai-bo Yu, Sen Yang, Ce Zhang, Ji Liu
10 Nov 2021 · VLM

Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping
Junya Chen, Sijia Wang, Lawrence Carin, Chenyang Tao
04 Nov 2021

Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning
Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik
02 Nov 2021 · FedML

Large-Scale Deep Learning Optimizations: A Comprehensive Survey
Xiaoxin He, Fuzhao Xue, Xiaozhe Ren, Yang You
01 Nov 2021

Optimal Compression of Locally Differentially Private Mechanisms
Abhin Shah, Wei-Ning Chen, Johannes Ballé, Peter Kairouz, Lucas Theis
29 Oct 2021

Bristle: Decentralized Federated Learning in Byzantine, Non-i.i.d. Environments
Joost Verbraeken, M. Vos, J. Pouwelse
21 Oct 2021

Layer-wise Adaptive Model Aggregation for Scalable Federated Learning
Sunwoo Lee, Tuo Zhang, Chaoyang He, Salman Avestimehr
19 Oct 2021 · FedML

Trade-offs of Local SGD at Scale: An Empirical Study
Jose Javier Gonzalez Ortiz, Jonathan Frankle, Michael G. Rabbat, Ari S. Morcos, Nicolas Ballas
15 Oct 2021 · FedML

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, S. Kadhe, Gauri Joshi
14 Oct 2021

ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
Hui-Po Wang, Sebastian U. Stich, Yang He, Mario Fritz
11 Oct 2021 · FedML, AI4CE

Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients
Lingjiao Chen, Leshang Chen, Hongyi Wang, S. Davidson, Yan Sun
04 Oct 2021 · FedML

Unbiased Single-scale and Multi-scale Quantizers for Distributed Optimization
S. Vineeth
26 Sep 2021 · MQ

Toward Efficient Federated Learning in Multi-Channeled Mobile Edge Network with Layerd Gradient Compression
Haizhou Du, Xiaojie Feng, Qiao Xiang, Haoyu Liu
18 Sep 2021

Fast Federated Edge Learning with Overlapped Communication and Computation and Channel-Aware Fair Client Scheduling
M. E. Ozfatura, Junlin Zhao, Deniz Gündüz
14 Sep 2021

Fundamental limits of over-the-air optimization: Are analog schemes optimal?
Shubham K. Jha, Prathamesh Mayekar, Himanshu Tyagi
11 Sep 2021

Toward Communication Efficient Adaptive Gradient Method
Xiangyi Chen, Xiaoyun Li, P. Li
10 Sep 2021 · FedML

Efficient Visual Recognition with Deep Neural Networks: A Survey on Recent Advances and New Directions
Yang Wu, Dingheng Wang, Xiaotong Lu, Fan Yang, Guoqi Li, W. Dong, Jianbo Shi
30 Aug 2021

EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher
19 Aug 2021 · FedML

On the Future of Cloud Engineering
David Bermbach, A. Chandra, C. Krintz, A. Gokhale, Aleksander Slominski, L. Thamsen, Everton Cavalcante, Tian Guo, Ivona Brandić, R. Wolski
19 Aug 2021

Compressing gradients by exploiting temporal correlation in momentum-SGD
Tharindu B. Adikari, S. Draper
17 Aug 2021

FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning
Nam Hyeon-Woo, Moon Ye-Bin, Tae-Hyun Oh
13 Aug 2021 · FedML

Rethinking gradient sparsification as total error minimization
Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
02 Aug 2021

DQ-SGD: Dynamic Quantization in SGD for Communication-Efficient Distributed Learning
Guangfeng Yan, Shao-Lun Huang, Tian-Shing Lan, Linqi Song
30 Jul 2021 · MQ

A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
14 Jul 2021 · FedML

BAGUA: Scaling up Distributed Learning with System Relaxations
Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang
03 Jul 2021

AdaptCL: Efficient Collaborative Learning with Dynamic and Adaptive Pruning
Guangmeng Zhou, Ke Xu, Qi Li, Yang Liu, Yi Zhao
27 Jun 2021

CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation
Enda Yu, Dezun Dong, Yemao Xu, Shuo Ouyang, Xiangke Liao
21 Jun 2021

CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning
Haibo Yang, Jia Liu, Elizabeth S. Bentley
14 Jun 2021 · FedML

Federated Learning on Non-IID Data: A Survey
Hangyu Zhu, Jinjin Xu, Shiqing Liu, Yaochu Jin
12 Jun 2021 · OOD, FedML

MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
Laurent Condat, Peter Richtárik
06 Jun 2021

Fast Federated Learning by Balancing Communication Trade-Offs
Milad Khademi Nori, Sangseok Yun, Il-Min Kim
23 May 2021 · FedML

Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM Framework
Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao
20 May 2021 · MQ

DRIVE: One-bit Distributed Mean Estimation
S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher
18 May 2021 · OOD, FedML

Compressed Communication for Distributed Training: Adaptive Methods and System
Yuchen Zhong, Cong Xie, Shuai Zheng, Yanghua Peng
17 May 2021

Towards Demystifying Serverless Machine Learning Training
Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang
17 May 2021

DP-SIGNSGD: When Efficiency Meets Privacy and Robustness
Lingjuan Lyu
11 May 2021 · FedML, AAML

Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates
Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Yi Pan
10 May 2021 · FedML

Scalable Projection-Free Optimization
Mingrui Zhang
07 May 2021

NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, V. Aksenov, Dan Alistarh, Daniel M. Roy
28 Apr 2021 · MQ

Sync-Switch: Hybrid Parameter Synchronization for Distributed Deep Learning
Shijian Li, Oren Mangoubi, Lijie Xu, Tian Guo
16 Apr 2021