Improved Algorithms for Convex-Concave Minimax Optimization

This paper studies minimax optimization problems $\min_x \max_y f(x,y)$, where $f(x,y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$, and $(L_x, L_{xy}, L_y)$-smooth. Zhang et al. provided the following lower bound on the gradient complexity of any first-order method: $\tilde{\Omega}\left(\sqrt{\frac{L_x}{m_x} + \frac{L_{xy}^2}{m_x m_y} + \frac{L_y}{m_y}}\right)$. This paper proposes a new algorithm with gradient complexity upper bound $\tilde{O}\left(\sqrt{\frac{L_x}{m_x} + \frac{L \cdot L_{xy}}{m_x m_y} + \frac{L_y}{m_y}}\right)$, where $L = \max\{L_x, L_{xy}, L_y\}$. This improves over the best known upper bound $\tilde{O}\left(\sqrt{\frac{L^2}{m_x m_y}}\right)$ by Lin et al. Our bound achieves a linear convergence rate and tighter dependency on condition numbers, especially when $L_{xy} \ll L$ (i.e., when the interaction between $x$ and $y$ is weak). Via reduction, our new bound also implies improved bounds for strongly convex-concave and convex-concave minimax optimization problems. When $f$ is quadratic, we can further improve the upper bound, which matches the lower bound up to a small sub-polynomial factor.
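To illustrate the gain in the weak-interaction regime, the following sketch specializes the two bounds to the decoupled case. Here $m_x, m_y$ denote the strong convexity/concavity moduli and $(L_x, L_{xy}, L_y)$ the smoothness constants of the abstract, with $L = \max\{L_x, L_{xy}, L_y\}$; this is an illustrative comparison, not a claim from the paper itself.

```latex
% Decoupled case L_{xy} = 0: f(x,y) = g(x) - h(y) splits into two
% independent single-variable problems. The new bound recovers the
% accelerated rates for each variable separately:
\tilde{O}\!\left(\sqrt{\tfrac{L_x}{m_x} + \tfrac{L \cdot L_{xy}}{m_x m_y} + \tfrac{L_y}{m_y}}\right)
\;\xrightarrow{\,L_{xy} = 0\,}\;
\tilde{O}\!\left(\sqrt{\tfrac{L_x}{m_x} + \tfrac{L_y}{m_y}}\right),
% whereas the earlier bound keeps a product of condition numbers:
\tilde{O}\!\left(\sqrt{\tfrac{L^2}{m_x m_y}}\right)
\;\xrightarrow{\,L_{xy} = 0\,}\;
\tilde{O}\!\left(\sqrt{\tfrac{\max\{L_x, L_y\}^2}{m_x m_y}}\right).
```

More generally, the interaction terms of the two upper bounds differ by a factor of $\sqrt{L/L_{xy}}$, which is large exactly when the coupling between $x$ and $y$ is weak.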