SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization
arXiv:2005.07041 · 13 May 2020
Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
Papers citing "SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization" (10 of 10 papers shown)
1. Communication Optimization for Decentralized Learning atop Bandwidth-limited Edge Networks. Tingyang Sun, Tuan Nguyen, Ting He. 16 Apr 2025.
2. Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression. Boyue Li, Yuejie Chi. 17 May 2023.
3. CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence. Kun-Yen Huang, Shin-Yi Pu. 14 Jan 2023.
4. Quantization for decentralized learning under subspace constraints. Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed. 16 Sep 2022.
5. BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression. Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi. 31 Jan 2022.
6. Decentralized Multi-Task Stochastic Optimization With Compressed Communications. Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar. 23 Dec 2021.
7. Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis. Ziyi Chen, Yi Zhou, Rongrong Chen, Shaofeng Zou. 08 Sep 2021.
8. QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning. Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi. 29 Jul 2021. [FedML, MQ]
9. A Field Guide to Federated Optimization. Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu. 14 Jul 2021. [FedML]
10. Federated Learning with Compression: Unified Analysis and Sharp Guarantees. Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi. 02 Jul 2020. [FedML]