Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity
Artavazd Maranjyan, Alexander Tyurin, Peter Richtárik
arXiv:2501.16168 · 27 January 2025

Papers citing "Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity" (23 papers)

BurTorch: Revisiting Training from First Principles by Coupling Autodiff, Math Optimization, and Systems
  Konstantin Burlachenko, Peter Richtárik · AI4CE · 18 Mar 2025
ATA: Adaptive Task Allocation for Efficient Resource Management in Distributed Machine Learning
  Artavazd Maranjyan, El Mehdi Saad, Peter Richtárik, Francesco Orabona · 02 Feb 2025
MindFlayer SGD: Efficient Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times
  Artavazd Maranjyan, Omar Shaikh Omar, Peter Richtárik · 05 Oct 2024
Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations
  Alexander Tyurin, Kaja Gruntkowska, Peter Richtárik · 24 May 2024
AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms
  Rustem Islamov, M. Safaryan, Dan Alistarh · FedML · 31 Oct 2023
Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning
  Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi · FedML · 16 Jun 2022
Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays
  Konstantin Mishchenko, Francis R. Bach, Mathieu Even, Blake E. Woodworth · 15 Jun 2022
Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees
  Hamid Reza Feyzmahdavian, M. Johansson · 09 Sep 2021
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
  Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain · 22 Jun 2021
Optimal Complexity in Decentralized Training
  Yucheng Lu, Christopher De Sa · 15 Jun 2020
Advances and Open Problems in Federated Learning
  Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao · FedML, AI4CE · 10 Dec 2019
Lower Bounds for Non-Convex Stochastic Optimization
  Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake E. Woodworth · 05 Dec 2019
A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates
  Yossi Arjevani, Ohad Shamir, Nathan Srebro · 26 Jun 2018
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization
  Blake E. Woodworth, Jialei Wang, Adam D. Smith, H. B. McMahan, Nathan Srebro · 25 May 2018
Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
  Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar · 03 Mar 2018
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, K. Scheinberg, Martin Takáč · 11 Feb 2018
Asynchronous Decentralized Parallel Stochastic Gradient Descent
  Xiangru Lian, Wei Zhang, Ce Zhang, Ji Liu · ODL · 18 Oct 2017
Optimal algorithms for smooth and strongly convex distributed optimization in networks
  Kevin Scaman, Francis R. Bach, Sébastien Bubeck, Y. Lee, Laurent Massoulié · 28 Feb 2017
Federated Learning: Strategies for Improving Communication Efficiency
  Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon · FedML · 18 Oct 2016
Revisiting Distributed Synchronous SGD
  Jianmin Chen, Xinghao Pan, R. Monga, Samy Bengio, Rafal Jozefowicz · 04 Apr 2016
An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
  Hamid Reza Feyzmahdavian, Arda Aytekin, M. Johansson · 18 May 2015
Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
  Saeed Ghadimi, Guanghui Lan · ODL · 22 Sep 2013
HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
  Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright · 28 Jun 2011