ResearchTrend.AI

arXiv:1911.08772 · Cited By
Understanding Top-k Sparsification in Distributed Deep Learning

20 November 2019
S. Shi, Xiangxiang Chu, Ka Chun Cheung, Simon See
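For context, Top-k sparsification (the technique the cited paper analyzes) has each worker transmit only the k largest-magnitude gradient entries per iteration, typically accumulating the dropped entries as an error-feedback residual. A minimal illustrative sketch, not the authors' implementation; the function name, the density choice, and the error-feedback detail are generic assumptions:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient; zero the rest.

    Returns the sparse gradient and the residual (dropped mass), which
    error-feedback schemes add into the next iteration's gradient.
    """
    flat = grad.ravel()
    # Indices of the k largest absolute values (unordered selection).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)

# Example: compress a 10,000-entry gradient to 0.1% density.
rng = np.random.default_rng(0)
g = rng.standard_normal(10_000)
k = max(1, int(0.001 * g.size))
sparse_g, err = topk_sparsify(g, k)
print(np.count_nonzero(sparse_g))  # → 10
```

Because `sparse_g + err` reconstructs the original gradient exactly, the residual can be carried forward without losing gradient information, which is the usual argument for Top-k's convergence behavior.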

Papers citing "Understanding Top-k Sparsification in Distributed Deep Learning"

18 / 18 papers shown
Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning
Zhiyong Jin, Runhua Xu, Chong Li, Y. Liu, Jianxin Li
AAML · FedML
30 Apr 2025

Delayed Random Partial Gradient Averaging for Federated Learning
Xinyi Hu
FedML
31 Dec 2024

VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
Roy Miles, Pradyumna Reddy, Ismail Elezi, Jiankang Deng
VLM
28 May 2024

Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization
Zhe Li, Bicheng Ying, Zidong Liu, Haibo Yang
FedML
24 May 2024

SignSGD with Federated Voting
Chanho Park, H. Vincent Poor, Namyoon Lee
FedML
25 Mar 2024

GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training
S. Tyagi, Martin Swany
20 May 2023

Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning
Van Tuan Tran, Huy Hieu Pham, Kok-Seng Wong
FedML
22 Feb 2023

Towards Efficient Communications in Federated Learning: A Contemporary Survey
Zihao Zhao, Yuzhu Mao, Yang Liu, Linqi Song, Ouyang Ye, Xinlei Chen, Wenbo Ding
FedML
02 Aug 2022

DNN gradient lossless compression: Can GenNorm be the answer?
Zhongzhu Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini
15 Nov 2021

Rethinking gradient sparsification as total error minimization
Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
02 Aug 2021

ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, ..., Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, K. Gopalakrishnan
21 Apr 2021

On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos
28 Feb 2021

Time-Correlated Sparsification for Communication-Efficient Federated Learning
Emre Ozfatura, Kerem Ozfatura, Deniz Gunduz
FedML
21 Jan 2021

Bayesian Federated Learning over Wireless Networks
Seunghoon Lee, Chanhoo Park, Songnam Hong, Yonina C. Eldar, Namyoon Lee
31 Dec 2020

FetchSGD: Communication-Efficient Federated Learning with Sketching
D. Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph E. Gonzalez, Raman Arora
FedML
15 Jul 2020

GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, Andreas Moshovos
MQ
08 May 2020

Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
Zhenheng Tang, S. Shi, Xiangxiang Chu
FedML
22 Feb 2020

Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, Xiangxiang Chu
20 Nov 2019