We give a new framework for solving the fundamental problem of low-rank matrix completion, i.e., approximating a rank-$r$ matrix $\mathbf{M} \in \mathbb{R}^{m \times n}$ (where $m \ge n$) from random observations. First, we provide an algorithm which completes $\mathbf{M}$ on $99\%$ of rows and columns under no further assumptions on $\mathbf{M}$ from $\approx mr$ samples and using $\approx mr^2$ time. Then, assuming the row and column spans of $\mathbf{M}$ satisfy additional regularity properties, we show how to boost this partial completion guarantee to a full matrix completion algorithm by aggregating solutions to regression problems involving the observations.

In the well-studied setting where $\mathbf{M}$ has incoherent row and column spans, our algorithms complete $\mathbf{M}$ to high precision from $mr^{2+o(1)}$ observations in $mr^{3+o(1)}$ time (omitting logarithmic factors in problem parameters), improving upon the prior state-of-the-art [JN15], which used $\approx mr^5$ samples and $\approx mr^7$ time. Under an assumption on the row and column spans of $\mathbf{M}$ we introduce (which is satisfied by random subspaces with high probability), our sample complexity improves to an almost information-theoretically optimal $mr^{1+o(1)}$, and our runtime improves to $mr^{2+o(1)}$. Our runtimes have the appealing property of matching the best known runtime to verify that a rank-$r$ decomposition agrees with the sampled observations. We also provide robust variants of our algorithms that, given random observations from $\mathbf{M} + \mathbf{N}$ with $\|\mathbf{N}\|_F \le \Delta$, complete $\mathbf{M}$ to Frobenius norm distance $\approx r^{1.5}\Delta$ in the same runtimes as the noiseless setting. Prior noisy matrix completion algorithms [CP10] only guaranteed a distance of $\approx \sqrt{n}\Delta$.
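As a rough, hypothetical illustration of the general idea of completing a low-rank matrix by solving regression problems over the observed entries (this is a standard alternating-minimization sketch, not the algorithm of the paper), consider the following minimal Python example; the function name `alternating_minimization` and all parameter choices are our own and are assumed only for this sketch.

```python
# Hypothetical sketch: complete a low-rank matrix by alternately solving
# small least-squares regressions against the observed entries.
# This is a generic alternating-minimization baseline, NOT the paper's method.
import numpy as np

def alternating_minimization(obs, mask, r, iters=50):
    """obs: m x n array with observed entries (zeros elsewhere);
    mask: boolean m x n array marking observed positions; r: target rank."""
    m, n = obs.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        # Fix V; regress each row of M onto V restricted to its observed entries.
        for i in range(m):
            idx = mask[i]
            if idx.any():
                U[i] = np.linalg.lstsq(V[idx], obs[i, idx], rcond=None)[0]
        # Fix U; regress each column of M onto U restricted to its observed entries.
        for j in range(n):
            idx = mask[:, j]
            if idx.any():
                V[j] = np.linalg.lstsq(U[idx], obs[idx, j], rcond=None)[0]
    return U @ V.T

# Usage: recover a random rank-3 matrix from ~30% of its entries.
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 100))
mask = rng.random(M.shape) < 0.3
M_hat = alternating_minimization(M * mask, mask, r=3)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # relative Frobenius error
```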