Avoiding strict saddle points of nonconvex regularized problems

Abstract

In this paper, we consider a class of nonconvex and nonsmooth sparse optimization problems, which encompass most existing nonconvex sparsity-inducing terms. We show that the second-order optimality conditions depend only on the nonzero entries of the stationary points. We propose two damped iterative reweighted algorithms, the damped iteratively reweighted $\ell_1$ algorithm (DIRL$_1$) and the damped iteratively reweighted $\ell_2$ algorithm (DIRL$_2$), to solve these problems. For DIRL$_1$, we show that the reweighted $\ell_1$ subproblem has a support identification property, so that DIRL$_1$ locally reverts to a gradient descent algorithm around a stationary point. For DIRL$_2$, we show that the solution map of the reweighted $\ell_2$ subproblem is differentiable and Lipschitz continuous everywhere. Consequently, the iteration maps of DIRL$_1$ and DIRL$_2$ and their inverses are Lipschitz continuous, and strict saddle points are unstable fixed points of these maps. By applying the stable manifold theorem, both algorithms are shown to converge only to local minimizers under random initialization when the strict saddle point property is assumed.
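To make the damped iteratively reweighted $\ell_1$ idea concrete, here is a minimal hypothetical sketch (not the authors' implementation). It assumes an example concave log penalty $\lambda \sum_i \log(1 + |x_i|/\epsilon)$, linearizes it at the current iterate to get per-coordinate weights, solves the weighted $\ell_1$ subproblem by one proximal-gradient (soft-thresholding) step, and then applies a damped convex-combination update; all parameter names and defaults are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # Prox of the weighted l1 norm: per-coordinate soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dirl1(grad_f, x0, step=0.1, damping=0.5, lam=0.1, eps=0.1, n_iter=200):
    """Hypothetical damped iteratively reweighted l1 (DIRL1) sketch for
    min_x f(x) + lam * sum_i log(1 + |x_i| / eps)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        # Reweighting: derivative of the concave penalty at |x_i|.
        w = lam / (eps + np.abs(x))
        # Weighted l1 proximal-gradient step solves the subproblem.
        y = soft_threshold(x - step * grad_f(x), step * w)
        # Damped update: a convex combination of the old iterate and the
        # subproblem solution keeps the iteration map well-behaved.
        x = (1.0 - damping) * x + damping * y
    return x
```

For example, with the smooth part $f(x) = \tfrac{1}{2}\|x - c\|^2$ and $c = (2,\ 0.01,\ -3)$, the small middle entry is driven exactly to zero while the large entries are only slightly shrunk, illustrating the sparsity-inducing behavior.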
