
Quasi-Newton Methods for Saddle Point Problems and Beyond

Neural Information Processing Systems (NeurIPS), 2021
Abstract

This paper studies quasi-Newton methods for solving strongly-convex-strongly-concave saddle point problems (SPP). We propose greedy and random Broyden family updates for SPP, which have an explicit local superlinear convergence rate of ${\mathcal O}\big(\big(1-\frac{1}{n\kappa^2}\big)^{k(k-1)/2}\big)$, where $n$ is the dimension of the problem, $\kappa$ is the condition number, and $k$ is the number of iterations. The design and analysis of the proposed algorithms are based on estimating the square of the indefinite Hessian matrix, which differs from classical quasi-Newton methods in convex optimization. We also present two specific Broyden family algorithms with BFGS-type and SR1-type updates, which enjoy the faster local convergence rate of ${\mathcal O}\big(\big(1-\frac{1}{n}\big)^{k(k-1)/2}\big)$. Additionally, we extend our algorithms to solve general nonlinear equations and prove that they enjoy a similar convergence rate.
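The key idea of approximating the square of an indefinite Hessian (which is symmetric positive semidefinite even when the Hessian itself is not) can be illustrated with a toy greedy SR1-type update. This is a minimal sketch, not the paper's exact scheme: the matrix `A`, the scaled-identity initialization, and the coordinate-greedy direction rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Toy indefinite symmetric "Hessian" (assumption: any symmetric
# matrix serves for this illustration).
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
M = A @ A  # square of the Hessian: symmetric positive semidefinite

# Initialize the estimate with a scaled identity that dominates M,
# so the residual B - M starts positive semidefinite.
B = np.linalg.norm(M, 2) * np.eye(n)

for _ in range(n):
    R = B - M
    # Greedy direction: the coordinate with the largest diagonal residual
    # (a simple stand-in for the greedy rule analyzed in the paper).
    i = int(np.argmax(np.diag(R)))
    u = np.eye(n)[:, i]
    denom = u @ R @ u
    if denom <= 1e-12:
        break  # residual is (numerically) zero: B matches M
    # SR1-type rank-one correction toward M; each step reduces
    # the rank of the residual R by one.
    Ru = R @ u
    B = B - np.outer(Ru, Ru) / denom

print(np.linalg.norm(B - M))  # residual shrinks toward zero
```

Since each rank-one correction zeroes out one direction of the positive semidefinite residual, the estimate recovers `M` exactly after at most `n` steps in this noiseless setting; the paper's algorithms apply such updates to changing Hessians along the iterates.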
