Gradient Diversity: a Key Ingredient for Scalable Distributed Learning
arXiv: 1706.05699 · 18 June 2017
Dong Yin, A. Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, Peter L. Bartlett
Papers citing "Gradient Diversity: a Key Ingredient for Scalable Distributed Learning" (7 of 7 papers shown):
Where to Begin? On the Impact of Pre-Training and Initialization in Federated Learning
John Nguyen, Jianyu Wang, Kshitiz Malik, Maziar Sanjabi, Michael G. Rabbat · FedML, AI4CE · 30 Jun 2022
Federated Learning with Buffered Asynchronous Aggregation
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba · FedML · 11 Jun 2021
Ray Interference: a Source of Plateaus in Deep Reinforcement Learning
Tom Schaul, Diana Borsa, Joseph Modayil, Razvan Pascanu · 25 Apr 2019
Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar · 03 Mar 2018
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt · 16 Aug 2016
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao · 07 Dec 2010