Towards Weaker Variance Assumptions for Stochastic Optimization

14 April 2025
Ahmet Alacaoglu
Yura Malitsky
Stephen J. Wright
Abstract

We revisit a classical assumption for analyzing stochastic gradient algorithms where the squared norm of the stochastic subgradient (or the variance for smooth problems) is allowed to grow as fast as the squared norm of the optimization variable. We contextualize this assumption in view of its inception in the 1960s, its seemingly independent appearance in the recent literature, its relationship to weakest-known variance assumptions for analyzing stochastic gradient algorithms, and its relevance in deterministic problems for non-Lipschitz nonsmooth convex optimization. We build on and extend a connection recently made between this assumption and the Halpern iteration. For convex nonsmooth, and potentially stochastic, optimization, we analyze horizon-free, anytime algorithms with last-iterate rates. For problems beyond simple constrained optimization, such as convex problems with functional constraints or regularized convex-concave min-max problems, we obtain rates for optimality measures that do not require boundedness of the feasible set.
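The growth condition referenced in the opening sentence is usually written in the following form; the constants and the choice of norm below are an illustrative assumption, not a quotation of the paper's exact statement:

\[
  \mathbb{E}_{\xi}\bigl[\|g(x,\xi)\|^{2}\bigr] \;\le\; c_0 + c_1\,\|x\|^{2}
  \qquad \text{for all } x,
\]

where g(x, ξ) denotes a stochastic subgradient of the objective at x and c_0, c_1 ≥ 0 are constants. Taking c_1 = 0 recovers the standard bounded-variance (or bounded-subgradient) setting, so the assumption strictly relaxes it by letting the bound grow with the squared norm of the iterate.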

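The Halpern iteration mentioned in the abstract anchors each update to the starting point with a vanishing weight. Below is a minimal sketch, assuming a generic nonexpansive operator T and the standard weights lam_k = 1/(k+2); the paper's specific variants and step-size rules are not reproduced here.

import numpy as np

def halpern(T, x0, num_iters=1000):
    """Classical Halpern iteration x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k)
    with anchoring weights lam_k = 1 / (k + 2).

    T  : a nonexpansive operator R^n -> R^n whose fixed point we seek
    x0 : the anchor and starting point
    """
    x = np.asarray(x0, dtype=float)
    anchor = x.copy()
    for k in range(num_iters):
        lam = 1.0 / (k + 2)              # weight on the anchor vanishes over time
        x = lam * anchor + (1.0 - lam) * T(x)
    return x

# Usage sketch: fixed point of the gradient map of f(x) = 0.5 * ||A x - b||^2,
# i.e. T(x) = x - eta * A^T (A x - b); its fixed points are least-squares solutions.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
eta = 1.0 / np.linalg.norm(A, 2) ** 2    # step size keeping T nonexpansive
T = lambda x: x - eta * (A.T @ (A @ x - b))
x_star = halpern(T, np.zeros(5), num_iters=5000)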
@article{alacaoglu2025_2504.09951,
  title={Towards Weaker Variance Assumptions for Stochastic Optimization},
  author={Ahmet Alacaoglu and Yura Malitsky and Stephen J. Wright},
  journal={arXiv preprint arXiv:2504.09951},
  year={2025}
}