
Non-Convex Compressed Sensing with Training Data

Abstract

Efficient algorithms for the sparse solution of under-determined linear systems $Ax = b$ are known for matrices $A$ satisfying suitable assumptions such as the restricted isometry property (RIP). Without such assumptions little is known, and without any assumptions on $A$ the problem is NP-hard. A common approach is to replace $\ell_1$ by $\ell_p$ minimization for $0 < p < 1$, which is no longer convex and typically requires some form of local initial values for provably convergent algorithms. In this paper, we consider an alternative, where instead of suitable initial values we are provided with extra training problems $Ax = B_l$, $l = 1, \dots, p$ that are related to our compressed sensing problem. They allow us to find the solution of the original problem $Ax = b$ with high probability in the range of a one-layer linear neural network with comparatively few assumptions on the matrix $A$.
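The abstract alone does not specify the authors' trained algorithm, but the $\ell_p$ minimization baseline it builds on is commonly approximated with iteratively reweighted least squares (IRLS). The following is a minimal illustrative sketch of that baseline, not the paper's method; all names, parameters, and the smoothing schedule are assumptions made for the example.

```python
import numpy as np

def irls_lp(A, b, p=0.5, n_iter=50, eps=1e-3):
    """Approximate argmin ||x||_p^p subject to Ax = b via iteratively
    reweighted least squares (a standard non-convex compressed-sensing
    baseline; illustrative only, not the paper's trained approach)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # minimum-norm starting point
    for _ in range(n_iter):
        # Smoothed l_p weights: w_i = (x_i^2 + eps)^(p/2 - 1)
        w = (x**2 + eps) ** (p / 2 - 1)
        W_inv = np.diag(1.0 / w)
        # Weighted least-squares step with equality constraint Ax = b:
        # x = W^{-1} A^T (A W^{-1} A^T)^{-1} b
        x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, b)
        eps = max(eps * 0.9, 1e-10)  # gradually tighten the smoothing
    return x

# Tiny usage example on a random sparse recovery problem (assumed setup).
rng = np.random.default_rng(0)
m, n, k = 20, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = irls_lp(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

In the setting described above, the random Gaussian $A$ stands in for a matrix with favorable properties; the paper's contribution concerns matrices without such assumptions, compensated for by the training problems $Ax = B_l$.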
