Escape saddle points faster on manifolds via perturbed Riemannian stochastic recursive gradient

Andi Han, Junbin Gao

arXiv:2010.12191 · 23 October 2020
Abstract

In this paper, we propose a variant of the Riemannian stochastic recursive gradient method that achieves a second-order convergence guarantee and escapes saddle points using simple perturbation. The idea is to perturb the iterates when the gradient is small and to carry out the stochastic recursive gradient updates over the tangent space, which avoids the complication of exploiting the Riemannian geometry. We show that in the finite-sum setting, our algorithm requires $\widetilde{\mathcal{O}}\big( \frac{\sqrt{n}}{\epsilon^2} + \frac{\sqrt{n}}{\delta^4} + \frac{n}{\delta^3} \big)$ stochastic gradient queries to find an $(\epsilon, \delta)$-second-order critical point. This strictly improves the complexity of perturbed Riemannian gradient descent and is superior to perturbed Riemannian accelerated gradient descent in large-sample settings. We also provide a complexity of $\widetilde{\mathcal{O}}\big( \frac{1}{\epsilon^3} + \frac{1}{\delta^3 \epsilon^2} + \frac{1}{\delta^4 \epsilon} \big)$ for online optimization, which is novel on Riemannian manifolds in terms of second-order convergence using only first-order information.
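
The mechanism described in the abstract can be pictured as follows: run SARAH/SPIDER-style recursive stochastic gradient updates on the manifold, and whenever the gradient estimate falls below a threshold, add a small random perturbation in the tangent space so the iterate can escape the neighborhood of a saddle point. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm: the manifold interface (retr, proj), the grad_fn(x, idx) oracle, and all step-size, batch, and radius parameters are assumptions, and the recursive updates here retract at every step rather than working in a single tangent space as the paper does.

```python
import numpy as np

def perturbed_rsrg(x0, grad_fn, retr, proj, n,
                   step=0.1, batch=32, epoch_len=50, outer_rounds=100,
                   eps=1e-3, radius=1e-3, rng=None):
    """Illustrative sketch of a perturbed recursive stochastic gradient loop
    on a manifold (not the paper's exact method).

    Assumed / hypothetical interface:
      grad_fn(x, idx) -- Riemannian stochastic gradient averaged over component indices idx
      retr(x, v)      -- retraction mapping a tangent vector v at x back onto the manifold
      proj(x, v)      -- orthogonal projection of an ambient vector onto the tangent space at x
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    for _ in range(outer_rounds):
        # Anchor the recursive estimator with a full-batch gradient.
        v = grad_fn(x, np.arange(n))
        if np.linalg.norm(v) <= eps:
            # Small gradient: perturb uniformly within a tangent-space ball of
            # the given radius, so the iterate can escape a saddle point.
            xi = proj(x, rng.normal(size=np.shape(x)))
            xi = xi / (np.linalg.norm(xi) + 1e-12)
            xi = xi * radius * rng.random() ** (1.0 / xi.size)
            x = retr(x, xi)
            v = grad_fn(x, np.arange(n))
        for _ in range(epoch_len):
            x_next = retr(x, -step * v)
            idx = rng.integers(0, n, size=batch)
            # SARAH/SPIDER-style recursive correction of the gradient estimate.
            # A faithful Riemannian version would transport v between tangent
            # spaces; here we simply re-project, purely for illustration.
            v = proj(x_next, grad_fn(x_next, idx) - grad_fn(x, idx) + v)
            x = x_next
    return x
```

Per the abstract, the paper instead performs the recursive updates over the tangent space after the perturbation, which is what avoids the complication of exploiting the Riemannian geometry (e.g., vector transports); the sketch above trades that for simplicity.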
