Alternating minimization and alternating descent over nonconvex sets

Abstract

Many optimization problems in high-dimensional statistics and signal processing involve more than one decision variable to be minimized, where the variables often reflect different structures or components within the signals being considered. Alternating minimization is a widely used method for solving such optimization problems, but its general properties have not yet been well understood in some settings. In this work, we study and analyze the performance of alternating minimization in the setting where the loss function is optimized over two variables and each variable is restricted to its own constraint set; in particular, we allow these constraints to be potentially nonconvex. Our analysis depends strongly on the notion of local concavity coefficients, recently proposed by Barber and Ha [2017] to quantify the concavity of a general nonconvex set. Our results further reveal important distinctions between alternating and non-alternating methods. Since computing the exact alternating minimization steps may not be tractable for some problems, we also consider an inexact version of the algorithm and provide a set of sufficient conditions to ensure fast convergence of the inexact algorithm. We demonstrate our framework on two important examples of this type of problem, low rank + sparse decomposition and multitask regression, and provide numerical experiments to validate our theoretical results.
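To make the setting concrete, below is a minimal sketch of alternating minimization over two nonconvex constraint sets, instantiated on the low rank + sparse decomposition example mentioned in the abstract. The specific loss (a squared Frobenius-norm fit ||L + S - M||_F^2), the rank/sparsity constraint sets, and all function names are illustrative assumptions, not the paper's exact formulation; for this loss each alternating step happens to be an exact Euclidean projection onto the corresponding nonconvex set.

```python
import numpy as np

def project_rank(X, r):
    # Euclidean projection onto {matrices of rank <= r}: truncated SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def project_sparse(X, k):
    # Euclidean projection onto {matrices with <= k nonzeros}:
    # keep the k entries of largest magnitude, zero out the rest.
    S = np.zeros_like(X)
    idx = np.argsort(np.abs(X), axis=None)[-k:]
    S.flat[idx] = X.flat[idx]
    return S

def alt_min(M, r, k, iters=50):
    # Alternating minimization of ||L + S - M||_F^2 with L constrained
    # to rank <= r and S constrained to be k-sparse. For this loss, each
    # partial minimization over one variable (with the other held fixed)
    # reduces to a Euclidean projection onto a nonconvex set.
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = project_rank(M - S, r)     # minimize over L with S fixed
        S = project_sparse(M - L, k)   # minimize over S with L fixed
    return L, S
```

For instance, on an observation M built as a rank-1 matrix plus a 2-sparse matrix, `alt_min(M, r=1, k=2)` returns iterates that remain feasible for both constraint sets and monotonically decrease the objective, since each step is an exact partial minimization.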
