ResearchTrend.AI

arXiv:2406.01478
Stochastic Newton Proximal Extragradient Method

3 June 2024
Ruichen Jiang
Michał Dereziński
Aryan Mokhtari
Abstract

Stochastic second-order methods achieve fast local convergence in strongly convex optimization by using noisy Hessian estimates to precondition the gradient. However, these methods typically reach superlinear convergence only when the stochastic Hessian noise diminishes, increasing per-iteration costs over time. Recent work in [arXiv:2204.09266] addressed this with a Hessian averaging scheme that achieves superlinear convergence without higher per-iteration costs. Nonetheless, the method has slow global convergence, requiring up to $\tilde{O}(\kappa^2)$ iterations to reach the superlinear rate of $\tilde{O}((1/t)^{t/2})$, where $\kappa$ is the problem's condition number. In this paper, we propose a novel stochastic Newton proximal extragradient method that improves these bounds, achieving a faster global linear rate and reaching the same fast superlinear rate in $\tilde{O}(\kappa)$ iterations. We accomplish this by extending the Hybrid Proximal Extragradient (HPE) framework, achieving fast global and local convergence rates for strongly convex functions with access to a noisy Hessian oracle.
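To make the core idea concrete, below is a minimal sketch of a Newton-type step preconditioned by a running *average* of noisy Hessian estimates, the mechanism the abstract attributes to the Hessian averaging scheme of [arXiv:2204.09266]. All names (`grad`, `noisy_hessian`, the quadratic test objective) are illustrative choices, not the authors' implementation; in particular, the paper's method additionally wraps such steps in a proximal extragradient (HPE) loop, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Strongly convex quadratic f(x) = 0.5 * x^T A x, with minimizer x* = 0
# and condition number kappa = 10 (illustrative assumption).
A = np.diag(np.linspace(1.0, 10.0, d))
x = rng.standard_normal(d)

def grad(x):
    # Exact gradient oracle for the quadratic above.
    return A @ x

def noisy_hessian(x):
    # Noisy but unbiased Hessian estimate; the noise is symmetrized
    # so each sample stays a symmetric matrix.
    E = 0.1 * rng.standard_normal((d, d))
    return A + (E + E.T) / 2

H_avg = np.zeros((d, d))
for t in range(1, 51):
    # Uniform running average of all Hessian samples seen so far.
    # Averaging shrinks the effective Hessian noise over time without
    # drawing more samples per iteration.
    H_avg += (noisy_hessian(x) - H_avg) / t
    # Newton-type step: gradient preconditioned by the averaged Hessian.
    x = x - np.linalg.solve(H_avg, grad(x))

print(float(np.linalg.norm(x)))  # distance to the minimizer x* = 0
```

On this toy quadratic the iterates contract toward the minimizer at an accelerating rate as the averaged Hessian noise decays, which is the qualitative behavior (noise-driven superlinear convergence) that both the averaging scheme and the proposed method target.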
