Successful applications of sparse models in computer vision and machine learning suggest that in many real-world applications, high-dimensional data are distributed across a union of low-dimensional subspaces. Nevertheless, the underlying structure may be corrupted by sparse errors and/or outliers. In this paper, we propose a bi-sparse model as a framework to analyze this problem and present a novel algorithm for recovering the union of subspaces in the presence of sparse corruptions. We further demonstrate the effectiveness of our method through experiments on both synthetic data and real-world vision data.
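To make the setting concrete, one plausible way to instantiate such a bi-sparse model (an illustrative sketch in our own notation, not necessarily the paper's exact formulation) is to decompose the data matrix X into a clean part L lying in a union of subspaces, expressed through a sparse self-representation W, plus a sparse corruption term E:

\[
\min_{L,\,W,\,E} \; \|W\|_1 + \lambda\,\|E\|_1
\quad \text{s.t.} \quad X = L + E, \;\; L = LW, \;\; \operatorname{diag}(W) = 0,
\]

where the "bi-sparsity" refers to the two sparse unknowns: W, whose sparse support encodes which points represent which others within each subspace, and E, which absorbs sparse errors and outliers; the diagonal constraint rules out the trivial self-representation L = L, and \lambda trades off the two sparsity terms.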