Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers (arXiv:2002.11005)

25 February 2020
Serge Kas Hanna, Rawad Bitar, Parimal Parag, Venkateswara Dasari, S. E. Rouayheb

Papers citing "Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers"

3 papers shown:

  • On Gradient Coding with Partial Recovery. Sahasrajit Sarmasarkar, V. Lalitha, Nikhil Karamchandani. 19 Feb 2021.
  • Taming Momentum in a Distributed Asynchronous Environment. Ido Hakimi, Saar Barkai, Moshe Gabel, Assaf Schuster. 26 Jul 2019.
  • Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.