Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees

9 September 2021
Hamid Reza Feyzmahdavian
M. Johansson

Papers citing "Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees"

9 / 9 papers shown

Title | Authors | Citations | Date
Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity | Artavazd Maranjyan, Alexander Tyurin, Peter Richtárik | 3 | 27 Jan 2025
Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays | Konstantin Mishchenko, Francis R. Bach, Mathieu Even, Blake E. Woodworth | 59 | 15 Jun 2022
Advances in Asynchronous Parallel and Distributed Optimization | Mahmoud Assran, Arda Aytekin, Hamid Reza Feyzmahdavian, M. Johansson, Michael G. Rabbat | 76 | 24 Jun 2020
AsyncQVI: Asynchronous-Parallel Q-Value Iteration for Discounted Markov Decision Processes with Near-Optimal Sample Complexity | Yibo Zeng, Fei Feng, W. Yin | 3 | 03 Dec 2018
LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning | Tianyi Chen, G. Giannakis, Tao Sun, W. Yin | 298 | 25 May 2018
ARock: an Algorithmic Framework for Asynchronous Parallel Coordinate Updates | Zhimin Peng, Yangyang Xu, Ming Yan, W. Yin | 258 | 08 Jun 2015
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives | Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien | 1,817 | 01 Jul 2014
A Proximal Stochastic Gradient Method with Progressive Variance Reduction | Lin Xiao, Tong Zhang | 738 | 19 Mar 2014
Minimizing Finite Sums with the Stochastic Average Gradient | Mark Schmidt, Nicolas Le Roux, Francis R. Bach | 1,245 | 10 Sep 2013