arXiv: 2003.10422
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
23 March 2020
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich
FedML
Papers citing "A Unified Theory of Decentralized SGD with Changing Topology and Local Updates" (21 of 71 shown):
1. Local SGD Converges Fast and Communicates Little
   Sebastian U. Stich (24 May 2018) [FedML]
2. D²: Decentralized Training over Decentralized Data
   Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu (19 Mar 2018)
3. Communication Compression for Decentralized Training
   Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, Ji Liu (17 Mar 2018)
4. The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning
   Siyuan Ma, Raef Bassily, M. Belkin (18 Dec 2017)
5. Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization
   A. Nedić, Alexander Olshevsky, Michael G. Rabbat (26 Sep 2017)
6. On the convergence properties of a K-step averaging stochastic gradient descent algorithm for nonconvex optimization
   Fan Zhou, Guojing Cong (03 Aug 2017)
7. Clique Gossiping
   Yang Liu, Bo Li, Brian D. O. Anderson, Guodong Shi (08 Jun 2017)
8. Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
   Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu (25 May 2017)
9. Optimal algorithms for smooth and strongly convex distributed optimization in networks
   Kevin Scaman, Francis R. Bach, Sébastien Bubeck, Y. Lee, Laurent Massoulié (28 Feb 2017)
10. A New Perspective on Randomized Gossip Algorithms
    Nicolas Loizou, Peter Richtárik (15 Oct 2016)
11. Federated Optimization: Distributed Machine Learning for On-Device Intelligence
    Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik (08 Oct 2016) [FedML]
12. Optimization Methods for Large-Scale Machine Learning
    Léon Bottou, Frank E. Curtis, J. Nocedal (15 Jun 2016)
13. Communication-Efficient Learning of Deep Networks from Decentralized Data
    H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas (17 Feb 2016) [FedML]
14. Asynchronous stochastic convex optimization
    John C. Duchi, Sorathan Chaturapruek, Christopher Ré (04 Aug 2015)
15. Communication Complexity of Distributed Convex Learning and Optimization
    Yossi Arjevani, Ohad Shamir (05 Jun 2015)
16. Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm
    Deanna Needell, Nathan Srebro, Rachel A. Ward (21 Oct 2013)
17. Asynchronous Distributed Optimization using a Randomized Alternating Direction Method of Multipliers
    F. Iutzeler, Pascal Bianchi, P. Ciblat, W. Hachem (12 Mar 2013)
18. Distributed optimization over time-varying directed graphs
    A. Nedić, Alexander Olshevsky (10 Mar 2013)
19. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
    Simon Lacoste-Julien, Mark Schmidt, Francis R. Bach (10 Dec 2012)
20. Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization
    Alexander Rakhlin, Ohad Shamir, Karthik Sridharan (26 Sep 2011)
21. Optimal Distributed Online Prediction using Mini-Batches
    O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao (07 Dec 2010)