
Asynchrony begets Momentum, with an Application to Deep Learning

31 May 2016
Ioannis Mitliagkas
Ce Zhang
Stefan Hadjis
Christopher Ré
arXiv:1605.09774
Abstract

Asynchronous methods are widely used in deep learning, but have limited theoretical justification when applied to non-convex problems. We give a simple argument that running stochastic gradient descent (SGD) in an asynchronous manner can be viewed as adding a momentum-like term to the SGD iteration. Our result does not assume convexity of the objective function, so it is applicable to deep learning systems. We observe that a standard queuing model of asynchrony results in a form of momentum that is commonly used by deep learning practitioners. This forges a link between queuing theory and asynchrony in deep learning systems, which could be useful for systems builders. For convolutional neural networks, we experimentally validate that the degree of asynchrony directly correlates with the momentum, confirming our main result. Since asynchrony has better hardware efficiency, this result may shed light on when asynchronous execution is more efficient for deep learning systems.
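
To make the abstract's central claim concrete, the sketch below contrasts explicit momentum SGD with asynchronous SGD that applies gradients computed from stale parameter copies. It is a minimal illustration, assuming a toy quadratic objective and a geometric staleness model standing in for the queuing model mentioned above; the function names, hyperparameters, and the particular worker-to-momentum pairing are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch (not from the paper): compare explicit momentum SGD with
# asynchronous SGD whose gradients are computed from stale parameter copies.
# The quadratic objective, geometric staleness model, and hyperparameters are
# assumptions chosen only to make the momentum-like effect easy to observe.
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 5.0])          # quadratic objective f(w) = 0.5 * w^T A w
grad = lambda w: A @ w           # exact gradient (stochastic noise omitted for clarity)

def momentum_sgd(lr=0.02, mu=0.75, steps=200):
    """Heavy-ball update: v <- mu*v - lr*grad(w); w <- w + v."""
    w, v = np.array([1.0, 1.0]), np.zeros(2)
    for _ in range(steps):
        v = mu * v - lr * grad(w)
        w = w + v
    return w

def async_sgd(lr=0.02, workers=4, steps=200):
    """Asynchronous SGD: each applied gradient was computed from a stale copy of w.
    Staleness is drawn geometrically, mimicking a simple queuing model of workers."""
    history = [np.array([1.0, 1.0])]
    for _ in range(steps):
        delay = min(rng.geometric(1.0 / workers), len(history))  # stale read
        w_stale = history[-delay]
        history.append(history[-1] - lr * grad(w_stale))
    return history[-1]

print("momentum SGD :", momentum_sgd())
print("async SGD    :", async_sgd())
```

Plotting the two trajectories on such a toy problem gives a quick feel for how increasing the number of asynchronous workers plays a role similar to raising the momentum coefficient, which is the kind of relationship the abstract reports validating experimentally on convolutional networks.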
