arXiv:2406.05033
Gradient Descent on Logistic Regression with Non-Separable Data and Large Step Sizes

7 June 2024
Si Yi Meng
Antonio Orvieto
Daniel Yiming Cao
Christopher De Sa
Abstract

We study gradient descent (GD) dynamics on logistic regression problems with large, constant step sizes. For linearly separable data, it is known that GD converges to the minimizer with arbitrarily large step sizes, a property which no longer holds when the problem is not separable. In fact, the behaviour can be much more complex: a sequence of period-doubling bifurcations begins at the critical step size 2/λ, where λ is the largest eigenvalue of the Hessian at the solution. Using a smaller-than-critical step size guarantees convergence if initialized near the solution, but does this suffice globally? In one dimension, we show that a step size less than 1/λ suffices for global convergence. However, for every step size between 1/λ and the critical step size 2/λ, one can construct a dataset such that GD converges to a stable cycle. In higher dimensions, this is possible even for step sizes less than 1/λ. Our results show that although local convergence is guaranteed for all step sizes less than the critical step size, global convergence is not, and GD may instead converge to a cycle depending on the initialization.
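A minimal NumPy sketch of the setting the abstract describes: gradient descent on a one-dimensional, non-separable logistic regression problem, probed with step sizes on either side of 1/λ and 2/λ. The dataset, initialization, and step-size grid are illustrative choices, not the constructions from the paper; whether the larger step sizes settle at the minimizer or oscillate depends on those choices.

```python
import numpy as np

# Hypothetical 1D dataset, chosen only to make the problem non-separable:
# the point x = 1 appears with both labels, so no w classifies everything correctly.
X = np.array([1.0, 1.0, -0.5])
y = np.array([1.0, -1.0, 1.0])   # labels in {+1, -1}

def grad(w):
    """Derivative of the average logistic loss (1/n) * sum_i log(1 + exp(-y_i * x_i * w))."""
    margins = y * X * w
    return np.mean(-y * X / (1.0 + np.exp(margins)))

def curvature(w):
    """Second derivative of the average logistic loss at w (1D case)."""
    s = 1.0 / (1.0 + np.exp(-(y * X * w)))   # sigmoid of the margins
    return np.mean(X**2 * s * (1.0 - s))

def run_gd(w, eta, iters):
    for _ in range(iters):
        w = w - eta * grad(w)
    return w

# Locate the minimizer with a conservative step size, then lambda = f''(w*).
w_star = run_gd(0.0, 0.1, 20000)
lam = curvature(w_star)
print(f"w* ~ {w_star:.4f},  lambda ~ {lam:.4f},  critical step 2/lambda ~ {2 / lam:.2f}")

# Probe step sizes below 1/lambda (globally convergent in 1D, per the paper),
# between 1/lambda and 2/lambda (minimizer or stable cycle, dataset-dependent),
# and above 2/lambda (past the first period-doubling bifurcation).
for eta in (0.5 / lam, 1.5 / lam, 2.5 / lam):
    w = run_gd(5.0, eta, 5000)      # burn-in from an arbitrary initialization
    tail = []
    for _ in range(4):              # print the last few iterates:
        w = w - eta * grad(w)       # a fixed point repeats one value,
        tail.append(round(w, 4))    # a cycle alternates between several
    print(f"eta = {eta:6.2f}: last iterates {tail}")
```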
