Understanding the unstable convergence of gradient descent

3 April 2022
Kwangjun Ahn
Junzhe Zhang
S. Sra
Abstract

Most existing analyses of (stochastic) gradient descent rely on the condition that for L-smooth costs, the step size is less than 2/L. However, many works have observed that in machine learning applications step sizes often do not fulfill this condition, yet (stochastic) gradient descent still converges, albeit in an unstable manner. We investigate this unstable convergence phenomenon from first principles, and discuss key causes behind it. We also identify its main characteristics, and how they interrelate based on both theory and experiments, offering a principled view toward understanding the phenomenon.
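The classical step-size condition referenced in the abstract can be checked on the simplest possible cost. Below is a minimal sketch (not taken from the paper; the function names and parameter values are illustrative assumptions) that runs gradient descent on the 1-D quadratic f(x) = (L/2)x². Each update is x ← (1 − ηL)x, so the iterates contract exactly when the step size η is below the 2/L threshold and blow up once it exceeds it.

```python
# Minimal sketch: the classical 2/L step-size threshold on a 1-D quadratic.
# f(x) = (L/2) * x^2 is L-smooth with gradient L*x, so a gradient step gives
# x <- (1 - eta*L) * x, which contracts iff |1 - eta*L| < 1, i.e. 0 < eta < 2/L.

def gd_quadratic(L, eta, x0, steps):
    """Run gradient descent on f(x) = (L/2) x^2 and return the final iterate."""
    x = x0
    for _ in range(steps):
        x = x - eta * (L * x)  # gradient of (L/2) x^2 is L x
    return x

L = 1.0
x0 = 1.0
for eta in (1.9 / L, 2.1 / L):  # just below and just above the 2/L threshold
    x_final = gd_quadratic(L, eta, x0, steps=50)
    verdict = "contracts toward 0" if abs(x_final) < abs(x0) else "blows up"
    print(f"eta = {eta:.2f}: |x_50| = {abs(x_final):.3e}  ({verdict})")
```

On a quadratic, exceeding 2/L simply diverges; the abstract's observation is that machine-learning costs are not globally quadratic, and there gradient descent with step sizes above this threshold is often seen to keep making progress, albeit unstably.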
