Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
arXiv:1805.09969 · 25 May 2018
Zebang Shen, Aryan Mokhtari, Tengfei Zhou, P. Zhao, Hui Qian
Papers citing "Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication" (8 of 8 papers shown):

- Decentralized Sum-of-Nonconvex Optimization. Zhuanghua Liu, K. H. Low. 04 Feb 2024.
- Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression. Xiaoyun Li, Ping Li. 25 Nov 2022. [FedML]
- Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee. Tianyi Lin, P. Mertikopoulos, Michael I. Jordan. 23 Oct 2022.
- 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed. Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He. 04 Feb 2021. [AI4CE]
- PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction. Haishan Ye, Wei Xiong, Tong Zhang. 30 Dec 2020.
- Gradient tracking and variance reduction for decentralized optimization and machine learning. Ran Xin, S. Kar, U. Khan. 13 Feb 2020.
- A Decentralized Proximal Point-type Method for Saddle Point Problems. Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, S. Pattathil, Zebang Shen, Nenggan Zheng. 31 Oct 2019.
- MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling. Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar. 23 May 2019.