
Saddle Point Optimization with Approximate Minimization Oracle

Abstract

A major approach to saddle point optimization $\min_x \max_y f(x, y)$ is the gradient-based approach, as popularized by generative adversarial networks (GANs). In contrast, we analyze an alternative approach relying only on an oracle that solves a minimization problem approximately. Our approach locates approximate solutions $x'$ and $y'$ to $\min_{x'} f(x', y)$ and $\max_{y'} f(x, y')$ at a given point $(x, y)$ and updates $(x, y)$ toward these approximate solutions $(x', y')$ with a learning rate $\eta$. On locally strongly convex--concave smooth functions, we derive conditions on $\eta$ that guarantee linear convergence to a local saddle point, which reveals a possible shortcoming of recently developed robust adversarial reinforcement learning algorithms. We develop a heuristic approach that adapts $\eta$ in a derivative-free manner, and we implement zero-order and first-order minimization algorithms. Numerical experiments are conducted to show the tightness of the theoretical results as well as the usefulness of the $\eta$ adaptation mechanism.
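The update rule described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses scipy's L-BFGS-B with a small iteration budget as a stand-in for the approximate minimization oracle, and the toy function `f`, the learning rate `eta`, and all other parameter values are illustrative assumptions.

```python
# Sketch of the oracle-based saddle point update (x, y) <- (x, y) + eta * ((x', y') - (x, y)).
import numpy as np
from scipy.optimize import minimize

def f(x, y):
    # Toy strongly convex--concave function with a saddle point at (0, 0).
    return 0.5 * x @ x + x @ y - 0.5 * y @ y

def approx_min_oracle(g, z0):
    # Approximate minimization oracle: a few steps of a generic solver.
    return minimize(g, z0, method="L-BFGS-B", options={"maxiter": 5}).x

x, y = np.array([1.0]), np.array([-1.0])
eta = 0.5  # learning rate toward the oracle solutions (illustrative value)

for t in range(50):
    x_prime = approx_min_oracle(lambda u: f(u, y), x)    # ~ argmin_x f(x, y)
    y_prime = approx_min_oracle(lambda v: -f(x, v), y)   # ~ argmax_y f(x, y)
    # Move (x, y) a fraction eta of the way toward (x', y').
    x, y = x + eta * (x_prime - x), y + eta * (y_prime - y)

print(x, y)  # approaches the saddle point (0, 0)
```

Note that naively setting `eta = 1.0` (jumping all the way to the oracle solutions) can cycle or diverge on such functions; the paper's conditions on $\eta$ address exactly this.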
