When Are Nonconvex Problems Not Scary?

Abstract

In this paper, we focus on nonconvex optimization problems with no "spurious" local minimizers (every local minimizer is also global) and in which every saddle point has a direction of strictly negative curvature, so that it can be escaped using second-order information. Concrete applications such as dictionary learning, phase retrieval, and tensor decomposition are known to induce such structures. We describe a second-order trust-region algorithm that provably converges to a local (and hence global) minimizer in polynomial time. Finally, we highlight alternatives and open problems in this direction.
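
To make the escape mechanism concrete, here is a minimal, hypothetical sketch (not the paper's algorithm, and carrying none of its guarantees): a generic second-order trust-region step escaping a strict saddle point of a toy nonconvex function, using SciPy's 'trust-exact' solver. The objective f, its gradient, and its Hessian below are illustrative choices, not taken from the paper.

```python
# Illustrative only: a second-order trust-region method escaping a strict saddle
# on a toy nonconvex problem (not the paper's algorithm).
# f(x1, x2) = (x1^2 - 1)^2 + x2^2 has minimizers at (+-1, 0) and a strict
# saddle at the origin, where the Hessian has eigenvalues -4 and 2.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2

def grad(x):
    return np.array([4.0 * (x[0] ** 2 - 1.0) * x[0], 2.0 * x[1]])

def hess(x):
    return np.array([[12.0 * x[0] ** 2 - 4.0, 0.0],
                     [0.0, 2.0]])

# Start very close to the saddle, where the gradient is nearly zero: a purely
# first-order method makes little progress here, but the trust-region step
# exploits the Hessian's negative-curvature direction to move away.
x0 = np.array([1e-8, 1e-8])
res = minimize(f, x0, jac=grad, hess=hess, method='trust-exact')
print(res.x)  # approximately (+-1, 0): a local (here also global) minimizer
```

The design point the sketch illustrates is the one named in the abstract: because every saddle point has strictly negative curvature, second-order information suffices to distinguish saddles from minimizers and to escape them.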
