Dictionary Learning with Few Samples and Matrix Concentration

Let A be an n × n matrix, X be an n × p matrix, and Y = AX. A challenging and important problem in data analysis, motivated by dictionary learning and other practical problems, is to recover both A and X, given Y. Under normal circumstances, it is clear that this problem is underdetermined. However, in the case when X is sparse and random, Spielman, Wang and Wright showed that one can recover both A and X efficiently from Y with high probability, given that p (the number of samples) is sufficiently large. Their method works for p ≥ C n^2 log^2 n, and they conjectured that p ≥ C n log n suffices. The bound n log n is sharp for an obvious information-theoretic reason. In this paper, we show that p ≥ C n log^4 n suffices, matching the conjectured bound up to a polylogarithmic factor. The core of our proof is a theorem concerning the concentration of random matrices, which is of independent interest. Our proof of the concentration result is based on two ideas. The first is an economical way to apply the union bound. The second is a refined version of Bernstein's concentration inequality for sums of independent variables. Neither is specific to random matrices, and both are applicable in general settings.
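As a concrete illustration of the generative model described above (not of the recovery algorithm, which the paper analyzes), the following sketch builds Y = AX from a random dictionary A and a Bernoulli-sparse coefficient matrix X; the dimensions and sparsity level are arbitrary choices for illustration only.

```python
import numpy as np

# Sketch of the dictionary-learning model: Y = A X, where A is an
# n x n dictionary and X is an n x p sparse random coefficient matrix.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
n, p = 20, 400           # p is the number of samples; the paper studies how large p must be
theta = 0.1              # each entry of X is nonzero with probability theta

A = rng.standard_normal((n, n))           # random dictionary (invertible with high probability)
mask = rng.random((n, p)) < theta         # Bernoulli sparsity pattern
X = mask * rng.standard_normal((n, p))    # sparse Gaussian coefficients
Y = A @ X                                 # the observed data

# The recovery problem: given only Y, find A and X (up to the usual
# permutation and scaling ambiguities).
print(Y.shape)            # (20, 400)
print(np.mean(X != 0))    # empirical sparsity, close to theta
```

The point of the sketch is only that Y alone looks underdetermined: A and X together have more unknowns than Y has entries unless p is large, which is why the sparsity of X and the sample count p drive the recovery guarantees.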