Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions

International Conference on Machine Learning (ICML), 2011
Abstract

We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of the sum of an (approximately) low-rank matrix $\Theta^\star$ with a second matrix $\Gamma^\star$ endowed with a complementary form of low-dimensional structure; this set-up includes many statistical models of interest, including forms of factor analysis, multi-task regression with shared structure, and robust covariance estimation. We derive a general theorem that gives upper bounds on the Frobenius norm error for an estimate of the pair $(\Theta^\star, \Gamma^\star)$ obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results are based on imposing a "spikiness" condition that is related to but milder than singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields non-asymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices $\Theta^\star$ that can be exactly or approximately low rank, and matrices $\Gamma^\star$ that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices, we establish matching lower bounds on the minimax error, showing that our results cannot be improved beyond constant factors. The sharpness of our theoretical predictions is confirmed by numerical simulations.
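For the low-rank-plus-entrywise-sparse specialization, the convex program described above has the form $\min_{\Theta,\Gamma} \tfrac12\|Y - \Theta - \Gamma\|_F^2 + \lambda\|\Theta\|_{*} + \mu\|\Gamma\|_1$. The following is a minimal illustrative sketch of solving such a program by alternating closed-form proximal updates (SVD soft-thresholding for the nuclear norm, entrywise soft-thresholding for the $\ell_1$ norm); the function names and the particular block-coordinate scheme are our own choices for illustration, not an algorithm given in the paper.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    # Proximal operator of tau * nuclear norm: soft-threshold singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    # Proximal operator of tau * entrywise l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(Y, lam, mu, n_iter=200):
    # Block-coordinate descent on the jointly convex objective
    #   0.5*||Y - Theta - Gamma||_F^2 + lam*||Theta||_* + mu*||Gamma||_1,
    # where each block update is an exact closed-form proximal step.
    Theta = np.zeros_like(Y)
    Gamma = np.zeros_like(Y)
    for _ in range(n_iter):
        Theta = svd_soft_threshold(Y - Gamma, lam)   # low-rank component
        Gamma = soft_threshold(Y - Theta, mu)        # entrywise-sparse component
    return Theta, Gamma
```

Because each block subproblem is solved exactly, the objective value is nonincreasing across iterations; the regularization weights $\lambda$ and $\mu$ play the roles of the nuclear-norm and $\ell_1$ penalties in the analysis.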
