
More Algorithms for Provable Dictionary Learning

Abstract

In dictionary learning, also known as sparse coding, the algorithm is given samples of the form $y = Ax$, where $x \in \mathbb{R}^m$ is an unknown random sparse vector and $A$ is an unknown dictionary matrix in $\mathbb{R}^{n\times m}$ (usually $m > n$, the overcomplete case). The goal is to learn $A$ and $x$. This problem has been studied in neuroscience, machine learning, vision, and image processing. In practice it is solved by heuristic algorithms, and provable algorithms have seemed hard to find. Recently, provable algorithms were found that work if the unknown feature vector $x$ is $\sqrt{n}$-sparse or even sparser. Spielman et al. \cite{DBLP:journals/jmlr/SpielmanWW12} did this for dictionaries where $m = n$; Arora et al. \cite{AGM} gave an algorithm for overcomplete ($m > n$) and incoherent matrices $A$; and Agarwal et al. \cite{DBLP:journals/corr/AgarwalAN13} handled a similar case but with weaker guarantees. This raised the problem of designing provable algorithms that allow sparsity $\gg \sqrt{n}$ in the hidden vector $x$. The current paper designs algorithms that allow sparsity up to $n/\mathrm{poly}(\log n)$. They work for a class of matrices in which features are individually recoverable, a new notion identified in this paper that may motivate further work. The algorithms run in quasipolynomial time because they use limited enumeration.
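As a concrete illustration of the generative model described above, the following sketch draws samples $y = Ax$ from a random overcomplete dictionary with a $k$-sparse hidden vector. The dimensions and sparsity level here are hypothetical placeholders, not values from the paper; the dictionary and sparse code are simply random, with no claim about the paper's "individually recoverable" condition.

```python
import numpy as np

# Hypothetical dimensions: n-dimensional samples, an overcomplete
# dictionary with m > n atoms, and a k-sparse hidden vector x.
n, m, k = 64, 128, 8
rng = np.random.default_rng(0)

# Unknown dictionary A in R^{n x m}; columns (atoms) normalized to unit norm.
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

def sample():
    """Draw one sample y = A x with a random k-sparse x."""
    x = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)  # random sparse support
    x[support] = rng.standard_normal(k)             # random nonzero values
    return A @ x, x

y, x = sample()
print(y.shape, np.count_nonzero(x))
```

The learning problem is the reverse direction: given many such samples $y$ (and neither $A$ nor $x$), recover the dictionary and the sparse codes.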
