ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Gradient Diversity: a Key Ingredient for Scalable Distributed Learning

18 June 2017
Dong Yin, A. Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, Peter L. Bartlett

Papers citing "Gradient Diversity: a Key Ingredient for Scalable Distributed Learning"

8 papers
Where to Begin? On the Impact of Pre-Training and Initialization in Federated Learning
John Nguyen, Jianyu Wang, Kshitiz Malik, Maziar Sanjabi, Michael G. Rabbat
14 Oct 2022 · FedML, AI4CE
Federated Learning with Buffered Asynchronous Aggregation
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba
11 Jun 2021 · FedML
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, Dimitris Papailiopoulos
29 Oct 2020
Improving the convergence of SGD through adaptive batch sizes
Scott Sievert, Zachary B. Charles
18 Oct 2019 · ODL
Ray Interference: a Source of Plateaus in Deep Reinforcement Learning
Tom Schaul, Diana Borsa, Joseph Modayil, Razvan Pascanu
25 Apr 2019
The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent
Karthik A. Sankararaman, Soham De, Zheng Xu, Wenjie Huang, Tom Goldstein
15 Apr 2019 · ODL
Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar
03 Mar 2018
Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
Pratik Chaudhari, Stefano Soatto
30 Oct 2017 · MLT