ResearchTrend.AI

Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays
arXiv:2206.07638 · 15 June 2022
Konstantin Mishchenko, Francis R. Bach, Mathieu Even, Blake E. Woodworth

Papers citing "Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays"

16 of 16 citing papers shown.

1. Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity
   Artavazd Maranjyan, Alexander Tyurin, Peter Richtárik
   27 Jan 2025 · cited by 3

2. Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates
   Cabrel Teguemne Fokam, Khaleelulla Khan Nazeer, Lukas König, David Kappel, Anand Subramoney
   08 Oct 2024 · cited by 0

3. Ordered Momentum for Asynchronous SGD
   Chang-Wei Shi, Yi-Rui Yang, Wu-Jun Li [ODL]
   27 Jul 2024 · cited by 0

4. Distributed Stochastic Gradient Descent with Staleness: A Stochastic Delay Differential Equation Based Framework
   Siyuan Yu, Wei Chen, H. V. Poor
   17 Jun 2024 · cited by 0

5. Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis
   Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton
   09 Apr 2024 · cited by 15

6. Communication-Efficient Federated Learning With Data and Client Heterogeneity
   Hossein Zakerinia, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh [FedML]
   20 Jun 2022 · cited by 8

7. Advances in Asynchronous Parallel and Distributed Optimization
   Mahmoud Assran, Arda Aytekin, Hamid Reza Feyzmahdavian, M. Johansson, Michael G. Rabbat
   24 Jun 2020 · cited by 76

8. Is Local SGD Better than Minibatch SGD?
   Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. B. McMahan, Ohad Shamir, Nathan Srebro [FedML]
   18 Feb 2020 · cited by 254

9. Better Theory for SGD in the Nonconvex World
   Ahmed Khaled, Peter Richtárik
   09 Feb 2020 · cited by 182

10. Stochastic Gradient Push for Distributed Deep Learning
    Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael G. Rabbat
    27 Nov 2018 · cited by 342

11. Local SGD Converges Fast and Communicates Little
    Sebastian U. Stich [FedML]
    24 May 2018 · cited by 1,056

12. Revisiting Distributed Synchronous SGD
    Jianmin Chen, Xinghao Pan, R. Monga, Samy Bengio, Rafal Jozefowicz
    04 Apr 2016 · cited by 799

13. Asynchronous Methods for Deep Reinforcement Learning
    Volodymyr Mnih, Adria Puigdomenech Badia, M. Berk Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
    04 Feb 2016 · cited by 8,805

14. Massively Parallel Methods for Deep Reinforcement Learning
    Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, ..., Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, David Silver [OffRL, AI4CE, GNN]
    15 Jul 2015 · cited by 504

15. Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
    Saeed Ghadimi, Guanghui Lan [ODL]
    22 Sep 2013 · cited by 1,538

16. Better Mini-Batch Algorithms via Accelerated Gradient Methods
    Andrew Cotter, Ohad Shamir, Nathan Srebro, Karthik Sridharan [ODL]
    22 Jun 2011 · cited by 313