On Unbounded Delays in Asynchronous Parallel Fixed-Point Algorithms
Robert Hannah, W. Yin
arXiv:1609.04746, 15 September 2016

Papers citing "On Unbounded Delays in Asynchronous Parallel Fixed-Point Algorithms" (10 papers shown)
  1. Escaping From Saddle Points Using Asynchronous Coordinate Gradient Descent. Marco Bornstein, Jin-Peng Liu, Jingling Li, Furong Huang. 17 Nov 2022.
  2. Delay-adaptive step-sizes for asynchronous learning. Xuyang Wu, Sindri Magnússon, Hamid Reza Feyzmahdavian, M. Johansson. 17 Feb 2022.
  3. Revisiting State Augmentation methods for Reinforcement Learning with Stochastic Delays. Somjit Nath, Mayank Baranwal, H. Khadilkar. 17 Aug 2021.
  4. Layered gradient accumulation and modular pipeline parallelism: fast and efficient training of large language models. J. Lamy-Poirier. 04 Jun 2021.
  5. Async-RED: A Provably Convergent Asynchronous Block Parallel Stochastic Method using Deep Denoising Priors. Yu Sun, Jiaming Liu, Yiran Sun, B. Wohlberg, Ulugbek S. Kamilov. 03 Oct 2020.
  6. Delay-Aware Model-Based Reinforcement Learning for Continuous Control. Baiming Chen, Mengdi Xu, Liang-Sheng Li, Ding Zhao. 11 May 2020.
  7. Taming Convergence for Asynchronous Stochastic Gradient Descent with Unbounded Delay in Non-Convex Learning. Xin Zhang, Jia-Wei Liu, Zhengyuan Zhu. 24 May 2018.
  8. Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD. Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar. 03 Mar 2018.
  9. More Iterations per Second, Same Quality -- Why Asynchronous Algorithms may Drastically Outperform Traditional Ones. Robert Hannah, W. Yin. 17 Aug 2017.
  10. A Primer on Coordinate Descent Algorithms. Hao-Jun Michael Shi, Shenyinying Tu, Yangyang Xu, W. Yin. 30 Sep 2016.