In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik-Chervonenkis (VC-) dimension of the set of half-spaces in generated by -sparse vectors is bounded below by and above by , plus some round-off terms. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a -sparse vector with measurements, given only the signs of the measurement vector. This result holds for \textit{all} probability measures on . It is further shown that random sign-flipping errors result only in an increase in the constant in the estimate. Because constructing a consistent algorithm is not straight-forward, we present a heuristic based on the -norm support vector machine, and illustrate that its computational performance is superior to a currently popular method.
View on arXiv