
Tuning the Scheduling of Distributed Stochastic Gradient Descent with Bayesian Optimization

1 December 2016 · arXiv:1612.00383
Valentin Dalibard, Michael Schaarschmidt, Eiko Yoneki
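The title describes tuning the hyperparameters of distributed SGD (for example, a learning-rate schedule) with Bayesian optimization. The sketch below is only a generic illustration of that idea, not the authors' system: the synthetic objective `training_loss`, the two schedule parameters, and the expected-improvement search over a random candidate pool are all assumptions chosen to keep the example self-contained.

```python
# Minimal, hypothetical sketch: Bayesian optimization of two SGD schedule
# parameters (log10 learning rate, per-epoch decay) against a synthetic
# stand-in objective. Not the paper's distributed setup.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def training_loss(params):
    """Synthetic proxy for final training loss given (log10 lr, decay)."""
    log_lr, decay = params
    # Arbitrary choice: loss is lowest near lr ~ 1e-2 with moderate decay.
    return (log_lr + 2.0) ** 2 + (decay - 0.5) ** 2 + 0.01 * np.random.randn()

rng = np.random.default_rng(0)
bounds = np.array([[-4.0, 0.0],   # log10 learning rate
                   [0.0, 1.0]])   # decay factor per epoch

# Initial random design.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([training_loss(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    # Expected improvement (for minimization) over a random candidate pool.
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    imp = y.min() - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, training_loss(x_next))

print("best (log10 lr, decay):", X[np.argmin(y)], "loss:", y.min())
```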

Papers citing "Tuning the Scheduling of Distributed Stochastic Gradient Descent with Bayesian Optimization"

1 / 1 papers shown
Title: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
Authors: N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Community: ODL
Published: 15 Sep 2016