A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates

This paper focuses on convex constrained optimization problems, where the solution is subject to a convex inequality constraint. In particular, we aim at challenging problems for which both projection into the constrained domain and linear optimization under the inequality constraint are time-consuming, which renders both projected gradient methods and conditional gradient methods (a.k.a. the Frank-Wolfe algorithm) expensive. In this paper, we develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates. We first present a general theory of optimization with only one projection. Its application to smooth optimization with only one projection yields an $O(1/\epsilon)$ iteration complexity, which can be further reduced under strong convexity and improves over the $O(1/\epsilon^2)$ iteration complexity established before for non-smooth optimization. Then we introduce a local error bound condition and develop faster convergent algorithms for non-strongly convex optimization at the price of a logarithmic number of projections. In particular, we achieve a convergence rate of $\widetilde{O}(1/\epsilon^{2(1-\theta)})$ for non-smooth optimization and $\widetilde{O}(1/\epsilon^{1-\theta})$ for smooth optimization, where $\theta\in(0,1]$ is a constant in the local error bound condition. An experiment on solving the constrained $\ell_1$-norm minimization problem in compressive sensing demonstrates that the proposed algorithms achieve a significant speed-up.
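To make the "only one projection" idea concrete, here is a minimal, assumption-laden sketch of the general pattern such methods follow, not the specific algorithm or rates analyzed in this paper: take penalized (sub)gradient steps that ignore the feasible set, then invoke the expensive projection oracle a single time at the end. The names grad_f, c, grad_c, project_onto_C, the penalty parameter rho, and the toy problem are all hypothetical choices for illustration.

```python
import numpy as np

def one_projection_method(grad_f, c, grad_c, project_onto_C, x0,
                          rho=4.0, step=0.01, n_iters=2000):
    """Sketch of a one-projection scheme (illustrative, not the paper's
    algorithm): take (sub)gradient steps on the penalized objective
    f(x) + rho * max(c(x), 0), never projecting along the way, and call
    the expensive projection oracle exactly once at the end."""
    x = x0.astype(float).copy()
    running_sum = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad_f(x)
        if c(x) > 0:           # add a penalty subgradient when infeasible
            g = g + rho * grad_c(x)
        x = x - step * g       # plain gradient step, no projection here
        running_sum += x
    # The single projection, applied to the averaged iterate.
    return project_onto_C(running_sum / n_iters)

# Toy usage: minimize ||x - b||^2 subject to ||x||_2 <= 1, so that
# c(x) = ||x||_2 - 1 and projection is rescaling onto the unit ball.
b = np.array([2.0, 0.0])
x_hat = one_projection_method(
    grad_f=lambda x: 2.0 * (x - b),
    c=lambda x: np.linalg.norm(x) - 1.0,
    grad_c=lambda x: x / max(np.linalg.norm(x), 1e-12),
    project_onto_C=lambda x: x / max(np.linalg.norm(x), 1.0),
    x0=np.zeros(2),
)
print(x_hat)  # close to [1, 0], the feasible point nearest to b
```

The point of the pattern is that the penalty term keeps the iterates close enough to the feasible set that one final projection suffices, whereas a standard projected gradient method would pay for the projection oracle at every iteration.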