A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates
Yossi Arjevani, Ohad Shamir, Nathan Srebro
arXiv:1806.10188, 26 June 2018
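For context, the paper studies SGD in which each update applies a stochastic gradient that was computed at an iterate from several steps earlier (a fixed delay), as happens in asynchronous or pipelined training. Below is a minimal sketch of that update rule; the quadratic objective, the step size, the delay value, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def delayed_sgd(grad, x0, step_size, delay, num_steps, rng):
    """SGD with delayed updates: x_{t+1} = x_t - eta * g(x_{t - delay}),
    where g is a stochastic gradient evaluated at a stale iterate."""
    history = [x0.copy()]                  # stored iterates x_0, x_1, ...
    x = x0.copy()
    for t in range(num_steps):
        stale_idx = max(0, t - delay)      # iterate the gradient was computed at
        g = grad(history[stale_idx], rng)  # stale stochastic gradient
        x = x - step_size * g
        history.append(x.copy())
    return x

# Illustrative use on noisy gradients of f(x) = 0.5 * ||x||^2 (hypothetical setup).
def noisy_quadratic_grad(x, rng, noise=0.1):
    return x + noise * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x_final = delayed_sgd(noisy_quadratic_grad, np.ones(5), step_size=0.1,
                      delay=4, num_steps=200, rng=rng)
print(np.linalg.norm(x_final))
```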
Papers citing "A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates" (19 papers shown)
Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity. Artavazd Maranjyan, A. Tyurin, Peter Richtárik. 27 Jan 2025.
Weight for Robustness: A Comprehensive Approach towards Optimal Fault-Tolerant Asynchronous ML. T. Dahan, Kfir Y. Levy. 17 Jan 2025.
Faster Stochastic Optimization with Arbitrary Delays via Asynchronous Mini-Batching. Amit Attia, Ofir Gaash, Tomer Koren. 14 Aug 2024.
Ordered Momentum for Asynchronous SGD. Chang-Wei Shi, Yi-Rui Yang, Wu-Jun Li. [ODL] 27 Jul 2024.
Asynchronous Federated Stochastic Optimization for Heterogeneous Objectives Under Arbitrary Delays. Charikleia Iakovidou, Kibaek Kim. [FedML] 16 May 2024.
PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning. Mingjia Shi, Yuhao Zhou, Kai Wang, Huaizheng Zhang, Shudong Huang, Qing Ye, Jiangcheng Lv. 13 Oct 2023.
Convergence Analysis of Decentralized ASGD. Mauro Dalle Lucca Tosi, Martin Theobald. 07 Sep 2023.
DoCoFL: Downlink Compression for Cross-Device Federated Learning. Ron Dorfman, S. Vargaftik, Y. Ben-Itzhak, Kfir Y. Levy. [FedML] 01 Feb 2023.
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient. Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov. [MoE] 27 Jan 2023.
Escaping From Saddle Points Using Asynchronous Coordinate Gradient Descent. Marco Bornstein, Jin-Peng Liu, Jingling Li, Furong Huang. 17 Nov 2022.
PersA-FL: Personalized Asynchronous Federated Learning. Taha Toghani, Soomin Lee, César A. Uribe. [FedML] 03 Oct 2022.
Characterizing & Finding Good Data Orderings for Fast Convergence of Sequential Gradient Methods. Amirkeivan Mohtashami, Sebastian U. Stich, Martin Jaggi. 03 Feb 2022.
Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent. Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad. 21 Oct 2021.
Fast Federated Learning in the Presence of Arbitrary Device Unavailability. Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang. [FedML] 08 Jun 2021.
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices. Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko. 04 Mar 2021.
The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication. Sebastian U. Stich, Sai Praneeth Karimireddy. [FedML] 11 Sep 2019.
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization. Blake E. Woodworth, Jialei Wang, Adam D. Smith, H. B. McMahan, Nathan Srebro. 25 May 2018.
Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.
Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.