Cited By: arXiv 2006.05752
Anytime MiniBatch: Exploiting Stragglers in Online Distributed Optimization
Nuwan S. Ferdinand, H. Al-Lawati, S. Draper, M. Nokleby
10 June 2020
Papers citing "Anytime MiniBatch: Exploiting Stragglers in Online Distributed Optimization" (8 of 8 papers shown):

1. SignSGD with Federated Voting. Chanho Park, H. Vincent Poor, Namyoon Lee. 25 Mar 2024. [FedML]
2. Taming Resource Heterogeneity In Distributed ML Training With Dynamic Batching. S. Tyagi, Prateek Sharma. 20 May 2023.
3. STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization. Feng Zhu, Jingjing Zhang, Xin Wang. 06 Oct 2022.
4. Lightweight Projective Derivative Codes for Compressed Asynchronous Gradient Descent. Pedro Soto, Ilia Ilmer, Haibin Guan, Jun Li. 31 Jan 2022.
5. Trade-offs of Local SGD at Scale: An Empirical Study. Jose Javier Gonzalez Ortiz, Jonathan Frankle, Michael G. Rabbat, Ari S. Morcos, Nicolas Ballas. 15 Oct 2021. [FedML]
6. Decentralized optimization with non-identical sampling in presence of stragglers. Tharindu B. Adikari, S. Draper. 25 Aug 2021.
7. Robust and Communication-Efficient Collaborative Learning. Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani. 24 Jul 2019.
8. Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.