Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions

4 June 2024
Wei Jiang, Sifan Yang, Yibo Wang, Lijun Zhang
arXiv:2406.01959
Abstract

This paper explores adaptive variance reduction methods for stochastic optimization based on the STORM technique. Existing adaptive extensions of STORM rely on strong assumptions like bounded gradients and bounded function values, or suffer an additional $\mathcal{O}(\log T)$ term in the convergence rate. To address these limitations, we introduce a novel adaptive STORM method that achieves an optimal convergence rate of $\mathcal{O}(T^{-1/3})$ for non-convex functions with our newly designed learning rate strategy. Compared with existing approaches, our method requires weaker assumptions and attains the optimal convergence rate without the additional $\mathcal{O}(\log T)$ term. We also extend the proposed technique to stochastic compositional optimization, obtaining the same optimal rate of $\mathcal{O}(T^{-1/3})$. Furthermore, we investigate the non-convex finite-sum problem and develop another innovative adaptive variance reduction method that achieves an optimal convergence rate of $\mathcal{O}(n^{1/4} T^{-1/2})$, where $n$ represents the number of component functions. Numerical experiments across various tasks validate the effectiveness of our method.
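To make the STORM idea concrete, the sketch below shows a generic STORM-style recursive momentum update paired with an adaptive step size that decays with accumulated gradient norms. It is a minimal illustration under assumed choices (the toy objective, the helper names `toy_grad` and `storm_run`, the fixed momentum weight `beta`, and the specific `eta` rule), not the paper's exact learning rate strategy or analysis.

```python
# Minimal sketch of a STORM-style variance-reduced update with an adaptive
# step size. Helper names (toy_grad, storm_run) and the eta schedule are
# illustrative assumptions, not the paper's exact algorithm.
import numpy as np

def toy_grad(x, noise):
    # Stochastic gradient of the toy objective f(x) = 0.5 * ||x||^2,
    # perturbed by additive noise shared between evaluation points.
    return x + noise

def storm_run(x0, T=1000, beta=0.9, eta_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = toy_grad(x, 0.1 * rng.standard_normal(x.shape))  # initial estimator
    grad_norm_sum = float(np.dot(d, d))  # accumulator driving the adaptive step
    for _ in range(T):
        # Adaptive learning rate: decays with accumulated estimator norms
        # (an illustrative choice standing in for the paper's schedule).
        eta = eta_scale / (1.0 + grad_norm_sum) ** (1.0 / 3.0)
        x_prev, x = x, x - eta * d
        # STORM recursion: evaluate the SAME stochastic sample at the new and
        # previous iterates so the correction term cancels shared noise.
        noise = 0.1 * rng.standard_normal(x.shape)
        d = toy_grad(x, noise) + (1.0 - beta) * (d - toy_grad(x_prev, noise))
        grad_norm_sum += float(np.dot(d, d))
    return x

if __name__ == "__main__":
    x_final = storm_run(np.ones(10))
    print("final iterate norm:", np.linalg.norm(x_final))
```

The key design point is that both gradient evaluations in the recursion share one stochastic sample; this is what lets the correction term reduce the variance of the estimator `d`, while the accumulated-norm step size removes the need for problem-dependent tuning.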
