Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations

24 June 2020
Yossi Arjevani
Yair Carmon
John C. Duchi
Dylan J. Foster
Ayush Sekhari
Karthik Sridharan
arXiv:2006.13476
Abstract

We design an algorithm which finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\| \le \epsilon$) using $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector products, matching guarantees that were previously available only under a stronger assumption of access to multiple queries with the same random seed. We prove a lower bound which establishes that this rate is optimal and, surprisingly, that it cannot be improved using stochastic $p$th-order methods for any $p \ge 2$, even when the first $p$ derivatives of the objective are Lipschitz. Together, these results characterize the complexity of non-convex stochastic optimization with second-order methods and beyond. Expanding our scope to the oracle complexity of finding $(\epsilon,\gamma)$-approximate second-order stationary points, we establish nearly matching upper and lower bounds for stochastic second-order methods. Our lower bounds here are novel even in the noiseless case.
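
The rates above count calls to two stochastic oracles: a minibatch gradient and a minibatch Hessian-vector product. (In this line of work, an $(\epsilon,\gamma)$-approximate second-order stationary point is typically a point $x$ with $\|\nabla F(x)\| \le \epsilon$ and $\nabla^2 F(x) \succeq -\gamma I$.) Below is a minimal sketch, not the paper's algorithm, of how such oracles are commonly realized with automatic differentiation; the objective `f`, the data shapes, and all names are illustrative assumptions.

```python
# Sketch of the two stochastic oracles counted in the complexity bounds,
# implemented with JAX autodiff. The objective and data are placeholders.
import jax
import jax.numpy as jnp

def f(x, batch):
    # Hypothetical non-convex objective: least squares through a tanh link,
    # evaluated on one sampled minibatch (a, b).
    a, b = batch
    return jnp.mean((jnp.tanh(a @ x) - b) ** 2)

def stochastic_gradient(x, batch):
    # One first-order oracle call: gradient of f on a sampled batch.
    return jax.grad(f)(x, batch)

def hessian_vector_product(x, v, batch):
    # One second-order oracle call: the minibatch Hessian applied to a
    # direction v, computed as a Jacobian-vector product of the gradient,
    # so the full Hessian is never formed.
    return jax.jvp(lambda y: jax.grad(f)(y, batch), (x,), (v,))[1]

# Tiny usage example with random data (assumed shapes, illustration only).
key = jax.random.PRNGKey(0)
ka, kb = jax.random.split(key)
a = jax.random.normal(ka, (32, 10))
b = jax.random.normal(kb, (32,))
x = jnp.zeros(10)
v = jnp.ones(10)
g = stochastic_gradient(x, (a, b))
hv = hessian_vector_product(x, v, (a, b))
print(g.shape, hv.shape)  # (10,) (10,)
```

Because the Hessian-vector product is obtained as a derivative of the gradient, its cost is within a small constant factor of a gradient evaluation, which is why such calls are counted on the same footing as stochastic gradients in the $O(\epsilon^{-3})$ bound.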
