Stochastic regularized majorization-minimization with weakly convex and multi-convex surrogates

5 January 2022
Hanbaek Lyu
Abstract

Stochastic majorization-minimization (SMM) is a class of stochastic optimization algorithms that proceed by sampling new data points and minimizing a recursive average of surrogate functions of an objective function. The surrogates are required to be strongly convex, and convergence rate analysis for the general non-convex setting was not available. In this paper, we propose an extension of SMM where surrogates are allowed to be only weakly convex or block multi-convex, and the averaged surrogates are approximately minimized with proximal regularization or block-minimized within diminishing radii, respectively. For the general nonconvex constrained setting with non-i.i.d. data samples, we show that the first-order optimality gap of the proposed algorithm decays at the rate $O((\log n)^{1+\epsilon}/n^{1/2})$ for the empirical loss and $O((\log n)^{1+\epsilon}/n^{1/4})$ for the expected loss, where $n$ denotes the number of data samples processed. Under an additional assumption, the latter convergence rate can be improved to $O((\log n)^{1+\epsilon}/n^{1/2})$. As a corollary, we obtain the first convergence rate bounds for various optimization methods under a general nonconvex dependent data setting: double-averaging projected gradient descent and its generalizations, proximal point empirical risk minimization, and online matrix/tensor decomposition algorithms. We also provide experimental validation of our results.
