TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

22 May 2017 · arXiv:1705.07878
W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
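Since this page gives only the title of the cited paper, a one-glance summary of its core mechanism may help: TernGrad cuts worker-to-server traffic by stochastically quantizing each gradient coordinate to one of three levels {-s, 0, +s}, where s is the gradient's maximum magnitude, chosen so the quantized gradient is unbiased in expectation. The NumPy sketch below is ours, not the authors' code; the `ternarize` function follows the basic ternarization rule but omits the paper's layer-wise scaling and gradient-clipping refinements.

```python
import numpy as np

def ternarize(grad: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stochastically quantize a gradient to the three levels {-s, 0, +s}.

    s = max|grad_i|; coordinate i keeps the value s * sign(grad_i) with
    probability |grad_i| / s and becomes 0 otherwise, so the result is
    unbiased: E[ternarize(grad)] == grad.
    """
    s = np.abs(grad).max()
    if s == 0.0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < np.abs(grad) / s  # Bernoulli(|g_i| / s)
    # A worker now transmits one float (s) plus ~2 bits per coordinate
    # instead of a 32-bit float per coordinate.
    return s * np.sign(grad) * keep

rng = np.random.default_rng(0)
print(ternarize(np.array([0.02, -0.5, 0.1, 0.0]), rng))  # e.g. [ 0.  -0.5  0.5  0. ]
```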

Papers citing "TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning"

Showing 50 of 467 citing papers, newest first:
• Adaptive Top-K in SGD for Communication-Efficient Distributed Learning
  Mengzhe Ruan, Guangfeng Yan, Yuanzhang Xiao, Linqi Song, Weitao Xu
  24 Oct 2022

• Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression
  Laurent Condat, Ivan Agarský, Peter Richtárik
  24 Oct 2022 [FedML]

• ScionFL: Efficient and Robust Secure Quantized Aggregation
  Y. Ben-Itzhak, Helen Mollering, Benny Pinkas, T. Schneider, Ajith Suresh, Oleksandr Tkachenko, S. Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai
  13 Oct 2022

• Sparse Random Networks for Communication-Efficient Federated Learning
  Berivan Isik, Francesco Pase, Deniz Gunduz, Tsachy Weissman, M. Zorzi
  30 Sep 2022 [FedML]

• Personalized Federated Learning with Communication Compression
  El Houcine Bergou, Konstantin Burlachenko, Aritra Dutta, Peter Richtárik
  12 Sep 2022 [FedML]

• SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters
  Zhuqing Liu, Xin Zhang, Jia-Wei Liu
  17 Aug 2022

• A Fast Blockchain-based Federated Learning Framework with Compressed Communications
  Laizhong Cui, Xiaoxin Su, Yipeng Zhou
  12 Aug 2022 [FedML]

• Quantized Adaptive Subgradient Algorithms and Their Applications
  Ke Xu, Jianqiao Wangni, Yifan Zhang, Deheng Ye, Jiaxiang Wu, P. Zhao
  11 Aug 2022

• Quantization enabled Privacy Protection in Decentralized Stochastic Optimization
  Yongqiang Wang, Tamer Basar
  07 Aug 2022

• BiFeat: Supercharge GNN Training via Graph Feature Quantization
  Yuxin Ma, Ping Gong, Jun Yi, Z. Yao, Cheng-rong Li, Yuxiong He, Feng Yan
  29 Jul 2022 [GNN]

• Reconciling Security and Communication Efficiency in Federated Learning
  Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour, Pierre Stock
  26 Jul 2022 [FedML]

• Quantized Training of Gradient Boosting Decision Trees
  Yu Shi, Guolin Ke, Zhuoming Chen, Shuxin Zheng, Tie-Yan Liu
  20 Jul 2022 [MQ, AI4CE]

• MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving Quantized Federated Learning
  Hua Ma, Qun Li, Yifeng Zheng, Zhi Zhang, Xiaoning Liu, Yan Gao, S. Al-Sarawi, Derek Abbott
  19 Jul 2022 [FedML]

• Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach
  Naifu Zhang, M. Tao, Jia Wang, Fan Xu
  28 Jun 2022

• Efficient Adaptive Federated Optimization of Federated Learning for IoT
  Zunming Chen, Hongyan Cui, Ensen Wu, Yu Xi
  23 Jun 2022

• sqSGD: Locally Private and Communication Efficient Federated Learning
  Yan Feng, Tao Xiong, Ruofan Wu, Lingjuan Lv, Leilei Shi
  21 Jun 2022 [FedML]

• Shifted Compression Framework: Generalizations and Improvements
  Egor Shulgin, Peter Richtárik
  21 Jun 2022

• Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
  Timothy Castiglia, Anirban Das, Shiqiang Wang, S. Patterson
  16 Jun 2022 [FedML]

• Communication-Efficient Robust Federated Learning with Noisy Labels
  Junyi Li, Jian Pei, Heng Huang
  11 Jun 2022 [FedML]

• Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression
  Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan
  08 Jun 2022

• Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
  Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik
  07 Jun 2022

• Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees
  Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang
  02 Jun 2022 [AI4CE]

• Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top
  Eduard A. Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
  01 Jun 2022 [AAML]

• DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training
  Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, Dacheng Tao
  01 Jun 2022 [FedML]

• Efficient-Adam: Communication-Efficient Distributed Adam
  Congliang Chen, Li Shen, Wei Liu, Zhi-Quan Luo
  28 May 2022

• ByteComp: Revisiting Gradient Compression in Distributed Training
  Zhuang Wang, Yanghua Peng, Yibo Zhu, T. Ng
  28 May 2022

• QUIC-FL: Quick Unbiased Compression for Federated Learning
  Ran Ben-Basat, S. Vargaftik, Amit Portnoy, Gil Einziger, Y. Ben-Itzhak, Michael Mitzenmacher
  26 May 2022 [FedML]

• On Distributed Adaptive Optimization with Gradient Compression
  Xiaoyun Li, Belhal Karimi, Ping Li
  11 May 2022

• EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization
  Laurent Condat, Kai Yi, Peter Richtárik
  09 May 2022

• FedShuffle: Recipes for Better Use of Local Work in Federated Learning
  Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael G. Rabbat
  27 Apr 2022 [FedML]

• Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications
  Han Cai, Ji Lin, Chengyue Wu, Zhijian Liu, Haotian Tang, Hanrui Wang, Ligeng Zhu, Song Han
  25 Apr 2022

• Sign Bit is Enough: A Learning Synchronization Framework for Multi-hop All-reduce with Ultimate Compression
  Feijie Wu, Shiqi He, Song Guo, Zhihao Qu, Yining Qi, W. Zhuang, Jie Zhang
  14 Apr 2022

• FedSynth: Gradient Compression via Synthetic Data in Federated Learning
  Shengyuan Hu, Jack Goetz, Kshitiz Malik, Hongyuan Zhan, Zhe Liu, Yue Liu
  04 Apr 2022 [DD, FedML]

• Scaling Language Model Size in Cross-Device Federated Learning
  Jae Hun Ro, Theresa Breiner, Lara McConnaughey, Mingqing Chen, A. Suresh, Shankar Kumar, Rajiv Mathews
  31 Mar 2022 [FedML]

• PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication
  Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Namjae Kim, Yingyan Lin
  20 Mar 2022 [GNN]

• Approximability and Generalisation
  A. J. Turner, Ata Kabán
  15 Mar 2022

• LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference
  Zhongzhi Yu, Y. Fu, Shang Wu, Mengquan Li, Haoran You, Yingyan Lin
  15 Mar 2022

• DNN Training Acceleration via Exploring GPGPU Friendly Sparsity
  Zhuoran Song, Yihong Xu, Han Li, Naifeng Jing, Xiaoyao Liang, Li Jiang
  11 Mar 2022

• Correlated quantization for distributed mean estimation and optimization
  A. Suresh, Ziteng Sun, Jae Hun Ro, Felix X. Yu
  09 Mar 2022

• Linear Stochastic Bandits over a Bit-Constrained Channel
  A. Mitra, Hamed Hassani, George J. Pappas
  02 Mar 2022

• Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon, Geondo Park, Wonyong Jeong, Sung Ju Hwang
  23 Feb 2022 [FedML]

• Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning
  Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor
  18 Feb 2022

• Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
  Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou
  15 Feb 2022

• Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam
  Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He
  12 Feb 2022 [OffRL, AI4CE]

• FL_PyTorch: optimization research simulator for federated learning
  Konstantin Burlachenko, Samuel Horváth, Peter Richtárik
  07 Feb 2022 [FedML]

• Lossy Gradient Compression: How Much Accuracy Can One Bit Buy?
  Sadaf Salehkalaibar, Stefano Rini
  06 Feb 2022 [FedML]

• DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity
  Chung-Yiu Yau, Hoi-To Wai
  01 Feb 2022

• Near-Optimal Sparse Allreduce for Distributed Deep Learning
  Shigang Li, Torsten Hoefler
  19 Jan 2022

• Egeria: Efficient DNN Training with Knowledge-Guided Layer Freezing
  Yiding Wang, D. Sun, Kai Chen, Fan Lai, Mosharaf Chowdhury
  17 Jan 2022

• Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory
  Nicole Mitchell, Johannes Ballé, Zachary B. Charles, Jakub Konecný
  07 Jan 2022 [FedML]