Nonconvex-Nonconcave Min-Max Optimization with a Small Maximization Domain

8 October 2021
Dmitrii Ostrovskii
Babak Barazandeh
Meisam Razaviyayn
Abstract

We study the problem of finding approximate first-order stationary points in optimization problems of the form $\min_{x \in X} \max_{y \in Y} f(x,y)$, where the sets $X, Y$ are convex and $Y$ is compact. The objective function $f$ is smooth, but assumed neither convex in $x$ nor concave in $y$. Our approach relies upon replacing the function $f(x,\cdot)$ with its $k$th order Taylor approximation (in $y$) and finding a near-stationary point in the resulting surrogate problem. To guarantee its success, we establish the following result: let the Euclidean diameter of $Y$ be small in terms of the target accuracy $\varepsilon$, namely $O(\varepsilon^{\frac{2}{k+1}})$ for $k \in \mathbb{N}$ and $O(\varepsilon)$ for $k = 0$, with the constant factors controlled by certain regularity parameters of $f$; then any $\varepsilon$-stationary point in the surrogate problem remains $O(\varepsilon)$-stationary for the initial problem. Moreover, we show that these upper bounds are nearly optimal: the aforementioned reduction provably fails when the diameter of $Y$ is larger. For $0 \le k \le 2$ the surrogate function can be efficiently maximized in $y$; our general approximation result then leads to efficient algorithms for finding a near-stationary point in nonconvex-nonconcave min-max problems, for which we also provide convergence guarantees.
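
To make the surrogate construction concrete, below is a minimal sketch (in JAX) of the idea for $k = 2$, not the paper's algorithm: for a fixed $x$, replace $f(x,\cdot)$ by its second-order Taylor expansion around an anchor $y_0$ and maximize that surrogate over a small Euclidean ball by projected gradient ascent. The toy objective `f`, the anchor `y0`, the radius `r`, and the step sizes are illustrative placeholders chosen for this sketch, and the ball radius plays the role of the small diameter of $Y$.

```python
# Minimal sketch of the k = 2 surrogate idea under assumed placeholders;
# f, y0, r, lr, and steps are illustrative, not taken from the paper.
import jax
import jax.numpy as jnp

def f(x, y):
    # Toy smooth objective with a nonconvex-nonconcave coupling term.
    return jnp.sin(jnp.dot(x, y)) + 0.5 * jnp.dot(x, x) - 0.1 * jnp.dot(y, y) ** 2

def taylor2_in_y(x, y0):
    """Second-order Taylor approximation of f(x, .) around y0."""
    g = jax.grad(f, argnums=1)(x, y0)       # gradient in y at y0
    H = jax.hessian(f, argnums=1)(x, y0)    # Hessian in y at y0
    def surrogate(y):
        d = y - y0
        return f(x, y0) + jnp.dot(g, d) + 0.5 * jnp.dot(d, H @ d)
    return surrogate

def project_ball(y, center, r):
    # Euclidean projection onto the ball of radius r around `center`.
    d = y - center
    n = jnp.linalg.norm(d)
    return center + d * jnp.minimum(1.0, r / jnp.maximum(n, 1e-12))

def maximize_surrogate(x, y0, r, steps=200, lr=0.05):
    # Projected gradient ascent on the quadratic surrogate over the small ball.
    surrogate = taylor2_in_y(x, y0)
    grad_s = jax.grad(surrogate)
    y = y0
    for _ in range(steps):
        y = project_ball(y + lr * grad_s(y), y0, r)
    return y

x = jnp.array([1.0, -0.5])
y0 = jnp.zeros(2)
y_star = maximize_surrogate(x, y0, r=0.1)   # r stands in for the small diameter of Y
```

In a full method, an outer loop would update $x$ using the surrogate's near-maximizer in $y$; the paper's guarantees concern when stationarity for the surrogate transfers to the original problem, which this sketch does not attempt to reproduce.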
