
Multi-Level Composite Stochastic Optimization via Nested Variance Reduction

SIAM Journal on Optimization (SIOPT), 2019
29 August 2019
Junyu Zhang
Lin Xiao
arXiv: 1908.11468
Abstract

We consider multi-level composite optimization problems where each mapping in the composition is either the expectation over a family of random smooth mappings or the sum of a finite number of smooth mappings. We present a normalized proximal approximate gradient (NPAG) method in which the approximate gradients are obtained via nested stochastic variance reduction. To find an approximate stationary point, where the expected norm of the gradient mapping is less than $\epsilon$, the total sample complexity of our method is $O(\epsilon^{-3})$ in the expectation case, and $O(N + \sqrt{N}\epsilon^{-2})$ in the finite-sum case, where $N$ is the total number of functions across all composition levels. In addition, the dependence of our total sample complexity on the number of composition levels is polynomial, rather than exponential as in previous work.
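The abstract describes NPAG only at a high level. As a rough illustration of the nested variance-reduction idea, the sketch below applies a SPIDER-style estimator to a hypothetical two-level composition $f(g(x))$ with $g(x) = \mathbb{E}_\xi[g_\xi(x)]$: large-batch "anchor" estimates of the inner function value and Jacobian are refreshed each epoch and corrected recursively with small-batch differences, and every step is normalized. The concrete `f`, `g_xi`, step size, and batch sizes are illustrative assumptions, not the paper's actual algorithmic parameters, and the proximal term is omitted (i.e., the regularizer is taken to be zero).

```python
import numpy as np

# Hypothetical two-level demo problem: F(x) = f(g(x)),
# g(x) = E_xi[ g_xi(x) ].  The maps below are stand-ins for the
# illustration only; they are not taken from the paper.

def f(y):
    return 0.5 * np.sum(y ** 2)          # outer smooth map (deterministic here)

def grad_f(y):
    return y

def g_xi(x, xi):
    return np.sin(x) * (1.0 + 0.1 * xi)  # one random realization of the inner map

def jac_g_xi(x, xi):
    return np.diag(np.cos(x) * (1.0 + 0.1 * xi))

def npag_sketch(x0, eta=0.05, epochs=20, inner=50,
                big_batch=1000, small_batch=10, seed=0):
    """SPIDER-style nested estimates of g(x) and g'(x) with normalized steps.

    Each epoch recomputes large-batch anchor estimates y ~ g(x) and
    z ~ g'(x); inner iterations update them with small-batch differences
    evaluated at consecutive iterates using common random samples.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = x.size
    for _ in range(epochs):
        xis = rng.normal(size=(big_batch, d))
        y = np.mean([g_xi(x, xi) for xi in xis], axis=0)      # estimate of g(x)
        z = np.mean([jac_g_xi(x, xi) for xi in xis], axis=0)  # estimate of g'(x)
        for _ in range(inner):
            v = z.T @ grad_f(y)                               # chain-rule gradient estimate
            x_new = x - eta * v / max(np.linalg.norm(v), 1e-12)  # normalized step
            # Recursive corrections: the SAME sample xi is used at
            # x_new and x, so the added variance shrinks with ||x_new - x||.
            xis = rng.normal(size=(small_batch, d))
            y += np.mean([g_xi(x_new, xi) - g_xi(x, xi) for xi in xis], axis=0)
            z += np.mean([jac_g_xi(x_new, xi) - jac_g_xi(x, xi) for xi in xis], axis=0)
            x = x_new
    return x

x_star = npag_sketch(np.ones(5))
print("final iterate:", x_star)
```

Reusing the same random sample at consecutive iterates is what keeps the correction terms small; the paper's multi-level analysis nests this kind of two-level recursion across all composition levels, which is how it avoids the exponential level dependence of earlier methods.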
