Anytime Stochastic Gradient Descent: A Time to Hear from all the Workers

Nuwan S. Ferdinand, S. Draper
6 October 2018

Papers citing "Anytime Stochastic Gradient Descent: A Time to Hear from all the Workers"

9 / 9 papers shown
1. Lightweight Projective Derivative Codes for Compressed Asynchronous Gradient Descent
   Pedro Soto, Ilia Ilmer, Haibin Guan, Jun Li (31 Jan 2022)
2. Gradient Coding with Dynamic Clustering for Straggler-Tolerant Distributed Learning
   Baturalp Buyukates, Emre Ozfatura, S. Ulukus, Deniz Gunduz (01 Mar 2021)
3. Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
   A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani (14 Feb 2021) [FedML]
4. Diversity/Parallelism Trade-off in Distributed Systems with Redundancy
   Pei Peng, E. Soljanin, P. Whiting (05 Oct 2020)
5. Coded Distributed Computing with Partial Recovery
   Emre Ozfatura, S. Ulukus, Deniz Gunduz (04 Jul 2020)
6. Efficient Replication for Straggler Mitigation in Distributed Computing
   Amir Behrouzi-Far, E. Soljanin (03 Jun 2020)
7. Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers
   Serge Kas Hanna, Rawad Bitar, Parimal Parag, Venkateswara Dasari, S. E. Rouayheb (25 Feb 2020)
8. Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability
   J. Keuper, Franz-Josef Pfreundt (22 Sep 2016) [GNN]
9. Optimal Distributed Online Prediction using Mini-Batches
   O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao (07 Dec 2010)