Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks
Xinmeng Huang, Kun Yuan
1 November 2022 · arXiv:2211.00533

Cited By
Papers citing "Optimal Complexity in Non-Convex Decentralized Learning over Time-Varying Networks"
17 / 17 papers shown
Communication-Efficient Federated Optimization over Semi-Decentralized Networks
He Wang, Yuejie Chi · FedML · 110 · 2 · 0 · 30 Nov 2023

Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression
Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan · 72 · 27 · 0 · 08 Jun 2022

A Unified and Refined Convergence Analysis for Non-Convex Decentralized Learning
Sulaiman A. Alghunaim, Kun Yuan · 68 · 62 · 0 · 19 Oct 2021

Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks
D. Kovalev, Elnur Gasanov, Peter Richtárik, Alexander Gasnikov · 46 · 44 · 0 · 08 Jun 2021

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin · MoE · 84 · 64 · 0 · 24 Apr 2021

Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
Tao R. Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi · FedML · 73 · 101 · 0 · 09 Feb 2021

An improved convergence analysis for decentralized online stochastic non-convex optimization
Ran Xin, U. Khan, S. Kar · 89 · 104 · 0 · 10 Aug 2020

Optimal Complexity in Decentralized Training
Yucheng Lu, Christopher De Sa · 72 · 75 · 0 · 15 Jun 2020

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich · FedML · 78 · 506 · 0 · 23 Mar 2020

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar · 60 · 163 · 0 · 23 May 2019

DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization
Parvin Nazari, Davoud Ataee Tarzanagh, George Michailidis · ODL · 62 · 67 · 0 · 25 Jan 2019

Stochastic Gradient Push for Distributed Deep Learning
Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael G. Rabbat · 73 · 345 · 0 · 27 Nov 2018

Local SGD Converges Fast and Communicates Little
Sebastian U. Stich · FedML · 166 · 1,061 · 0 · 24 May 2018

D²: Decentralized Training over Decentralized Data
Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu · 31 · 350 · 0 · 19 Mar 2018

Optimal algorithms for smooth and strongly convex distributed optimization in networks
Kevin Scaman, Francis R. Bach, Sébastien Bubeck, Y. Lee, Laurent Massoulié · 58 · 329 · 0 · 28 Feb 2017

NEXT: In-Network Nonconvex Optimization
P. Lorenzo, G. Scutari · 91 · 508 · 0 · 01 Feb 2016

Diffusion Adaptation Strategies for Distributed Optimization and Learning over Networks
Jianshu Chen, Ali H. Sayed · 93 · 654 · 0 · 31 Oct 2011