Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity

10 December 2021
Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He
arXiv:2112.05604
Abstract

Gradient descent ascent (GDA), the simplest single-loop algorithm for nonconvex minimax optimization, is widely used in practical applications such as generative adversarial networks (GANs) and adversarial training. Despite its desirable simplicity, recent work shows that GDA has inferior convergence rates in theory, even assuming strong concavity of the objective in one variable. This paper establishes new convergence results for two alternative single-loop algorithms -- alternating GDA and smoothed GDA -- under the mild assumption that the objective satisfies the Polyak-Lojasiewicz (PL) condition in one variable. We prove that, to find an $\epsilon$-stationary point, (i) alternating GDA and its stochastic variant (without mini-batching) require $O(\kappa^{2} \epsilon^{-2})$ and $O(\kappa^{4} \epsilon^{-4})$ iterations, respectively, while (ii) smoothed GDA and its stochastic variant (without mini-batching) require $O(\kappa \epsilon^{-2})$ and $O(\kappa^{2} \epsilon^{-4})$ iterations, respectively. The latter greatly improves over vanilla GDA and gives the best complexity results known to date among single-loop algorithms under similar settings. We further showcase the empirical efficiency of these algorithms in training GANs and in robust nonlinear regression.
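To illustrate the distinction between vanilla (simultaneous) GDA and the alternating variant discussed in the abstract, here is a minimal sketch, not the authors' code: in alternating GDA the ascent step uses the freshly updated minimization variable, whereas simultaneous GDA computes both gradients at the same iterate. The toy quadratic objective, step sizes, and iteration count are illustrative assumptions, not the paper's nonconvex-PL setting or experimental setup; smoothed GDA (not shown) additionally maintains an auxiliary smoothing sequence.

```python
# Sketch: simultaneous vs. alternating GDA on a toy minimax problem
#   f(x, y) = 0.5 * x^2 + x * y - 0.5 * y^2,
# minimized over x and maximized over y. All hyperparameters are illustrative.

def grad_x(x, y):
    # partial derivative of f with respect to x
    return x + y

def grad_y(x, y):
    # partial derivative of f with respect to y
    return x - y

def simultaneous_gda(x, y, eta_x=0.05, eta_y=0.05, iters=1000):
    """Vanilla GDA: both gradients are evaluated at the same iterate."""
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - eta_x * gx, y + eta_y * gy
    return x, y

def alternating_gda(x, y, eta_x=0.05, eta_y=0.05, iters=1000):
    """Alternating GDA: the ascent step sees the freshly updated x."""
    for _ in range(iters):
        x = x - eta_x * grad_x(x, y)
        y = y + eta_y * grad_y(x, y)  # uses the new x
    return x, y

if __name__ == "__main__":
    x0, y0 = 1.0, 1.0
    print("simultaneous GDA:", simultaneous_gda(x0, y0))
    print("alternating GDA: ", alternating_gda(x0, y0))
```

For this toy problem both methods approach the stationary point (0, 0); the paper's contribution is the iteration-complexity analysis of the alternating and smoothed variants in the nonconvex-PL regime summarized above.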
