Gradient Diversity: a Key Ingredient for Scalable Distributed Learning
arXiv:1706.05699 (v3, latest) · 18 June 2017
Dong Yin, A. Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, Peter L. Bartlett
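The quantity in the paper's title, gradient diversity, measures how dissimilar per-example gradients are: the sum of squared individual gradient norms divided by the squared norm of their sum. A minimal NumPy sketch of that ratio (the helper name `gradient_diversity` is ours, not from the paper):

```python
import numpy as np

def gradient_diversity(grads):
    """Gradient diversity of a set of per-example gradients:
    sum_i ||g_i||^2 / ||sum_i g_i||^2.
    High values mean gradients point in dissimilar directions,
    which the paper links to better scaling of mini-batch SGD."""
    grads = np.asarray(grads, dtype=float)
    numerator = np.sum(np.linalg.norm(grads, axis=1) ** 2)
    denominator = np.linalg.norm(grads.sum(axis=0)) ** 2
    return numerator / denominator

# Orthogonal gradients are maximally diverse:
print(gradient_diversity([[1.0, 0.0], [0.0, 1.0]]))  # 1.0
# n identical gradients give 1/n:
print(gradient_diversity([[1.0, 0.0], [1.0, 0.0]]))  # 0.5
```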
Papers citing "Gradient Diversity: a Key Ingredient for Scalable Distributed Learning" (8 / 8 papers shown)
Where to Begin? On the Impact of Pre-Training and Initialization in Federated Learning
John Nguyen, Jianyu Wang, Kshitiz Malik, Maziar Sanjabi, Michael G. Rabbat
FedML, AI4CE · 14 Oct 2022
Federated Learning with Buffered Asynchronous Aggregation
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba
FedML · 11 Jun 2021
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, Dimitris Papailiopoulos
29 Oct 2020
Improving the convergence of SGD through adaptive batch sizes
Scott Sievert, Zachary B. Charles
ODL · 18 Oct 2019
Ray Interference: a Source of Plateaus in Deep Reinforcement Learning
Tom Schaul, Diana Borsa, Joseph Modayil, Razvan Pascanu
25 Apr 2019
The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent
Karthik A. Sankararaman, Soham De, Zheng Xu, Wenjie Huang, Tom Goldstein
ODL · 15 Apr 2019
Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar
3 Mar 2018
Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
Pratik Chaudhari, Stefano Soatto
MLT · 30 Oct 2017