When is sparse dictionary learning well-posed?

Abstract

Dictionary learning methods for sparse coding have exposed underlying structure in many kinds of natural signals. However, universal theorems guaranteeing the statistical consistency of inference in this model are lacking. Here, we prove that for almost all diverse enough datasets generated from the model, latent dictionaries and sparse codes are uniquely identifiable up to an error commensurate with measurement noise. Applications are given to data analysis, neuroscience, and engineering.
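To make the generative model concrete, the following sketch simulates data from it: signals are noisy linear combinations of a few columns (atoms) of a latent dictionary. All dimensions, the sparsity level `k`, and the noise scale here are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n-dimensional signals, m dictionary atoms,
# N samples, at most k nonzero coefficients per sparse code.
n, m, N, k = 20, 30, 500, 3

# Latent dictionary: m unit-norm atoms as columns.
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)

# Latent sparse codes: k nonzeros per column on random supports.
X = np.zeros((m, N))
for j in range(N):
    support = rng.choice(m, size=k, replace=False)
    X[support, j] = rng.standard_normal(k)

# Observed data: model output plus measurement noise.
noise_level = 0.01
Y = D @ X + noise_level * rng.standard_normal((n, N))

# The paper's claim concerns this setup: for almost all sufficiently
# diverse datasets Y generated this way, (D, X) is identifiable from Y
# (up to permutation and scaling of atoms) to within the noise level.
residual = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
```

Inference itself (recovering `D` and `X` from `Y` alone) would be done by a dictionary learning algorithm; the sketch only shows the forward model whose well-posedness the paper addresses.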
