Single-loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions

28 May 2024
Quanqi Hu
Qi Qi
Zhaosong Lu
Tianbao Yang
Abstract

In this paper, we study a class of non-smooth non-convex problems of the form $\min_{x}\left[\max_{y\in Y}\phi(x, y) - \max_{z\in Z}\psi(x, z)\right]$, where both $\Phi(x) = \max_{y\in Y}\phi(x, y)$ and $\Psi(x) = \max_{z\in Z}\psi(x, z)$ are weakly convex functions, and $\phi(x, y)$, $\psi(x, z)$ are strongly concave in $y$ and $z$, respectively. This class covers two families of problems that have been studied but lack single-loop stochastic algorithms: difference of weakly convex functions and weakly convex strongly-concave min-max problems. We propose a stochastic Moreau envelope approximate gradient method, dubbed SMAG, the first single-loop algorithm for solving these problems, and provide a state-of-the-art non-asymptotic convergence rate. The key idea of the design is to compute an approximate gradient of the Moreau envelopes of $\Phi$ and $\Psi$ using only one step of stochastic gradient update of the primal and dual variables. Empirically, we conduct experiments on positive-unlabeled (PU) learning and partial area under ROC curve (pAUC) optimization with an adversarial fairness regularizer to validate the effectiveness of our proposed algorithm.
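To make the single-loop idea concrete, the following is a minimal sketch, not the paper's SMAG. It instantiates a toy problem where $\phi(x,y) = y^\top Ax - \tfrac{\mu}{2}\|y\|^2$ and $\psi(x,z) = z^\top Bx - \tfrac{\mu}{2}\|z\|^2$ (so both inner maximizations are strongly concave), uses exact gradients instead of stochastic estimates, and takes one gradient step per iteration on each of the dual variables ($y$, $z$) and on running estimates of the proximal points of $\Phi$ and $\Psi$, from which the Moreau-envelope gradient $\rho^{-1}(x - u) - \rho^{-1}(x - v)$ is formed. All names, step sizes, and the specific problem instance are illustrative assumptions.

```python
import numpy as np

# Toy instance (an assumption, not from the paper):
#   phi(x, y) = y @ (A x) - 0.5 * mu * ||y||^2, so Phi(x) = ||A x||^2 / (2 mu)
#   psi(x, z) = z @ (B x) - 0.5 * mu * ||z||^2, so Psi(x) = ||B x||^2 / (2 mu)
# With B = 0.5 * A the difference Phi - Psi is bounded below.
rng = np.random.default_rng(0)
d, m = 5, 4
A = rng.standard_normal((m, d))
B = 0.5 * A
mu, rho = 1.0, 0.5          # strong concavity and Moreau parameters
eta_x, eta_y = 0.05, 0.2    # primal and dual step sizes

x = rng.standard_normal(d)
u = x.copy()                # running estimate of the prox point of Phi at x
v = x.copy()                # running estimate of the prox point of Psi at x
y = np.zeros(m)
z = np.zeros(m)

for _ in range(5000):
    # One ascent step each on the dual variables y and z.
    y += eta_y * (A @ u - mu * y)
    z += eta_y * (B @ v - mu * z)
    # One descent step each on the proximal-point estimates:
    # u tracks argmin_u Phi(u) + ||u - x||^2 / (2 rho), similarly for v.
    u -= eta_x * (A.T @ y + (u - x) / rho)
    v -= eta_x * (B.T @ z + (v - x) / rho)
    # Approximate gradient of the Moreau envelopes' difference at x.
    g = (x - u) / rho - (x - v) / rho
    x -= eta_x * g

print(np.linalg.norm(g))  # stationarity measure; should be near zero
```

In the paper's stochastic setting the gradients of $\phi$ and $\psi$ would be replaced by mini-batch estimates, which is what makes the single-loop structure (one update per variable per iteration) nontrivial to analyze.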
