ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1710.07973

An Approach to One-Bit Compressed Sensing Based on Probably Approximately Correct Learning Theory

22 October 2017
M. Ahsen
M. Vidyasagar
Abstract

In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik-Chervonenkis (VC-) dimension of the set of half-spaces in $\mathbb{R}^n$ generated by $k$-sparse vectors is bounded below by $k \lg(n/k)$ and above by $2k \lg(n/k)$, plus some round-off terms. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a $k$-sparse vector with $O(k \lg(n/k))$ measurements, given only the signs of the measurement vector. This result holds for \textit{all} probability measures on $\mathbb{R}^n$. It is further shown that random sign-flipping errors result only in an increase in the constant in the $O(k \lg(n/k))$ estimate. Because constructing a consistent algorithm is not straightforward, we present a heuristic based on the $\ell_1$-norm support vector machine, and illustrate that its computational performance is superior to a currently popular method.
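To make the one-bit measurement model concrete, the sketch below generates a $k$-sparse unit-norm signal, observes only the signs of Gaussian measurements, and recovers a direction estimate. The recovery step here is a simple back-projection with hard thresholding, chosen for brevity; it is an illustrative stand-in, not the paper's $\ell_1$-norm SVM heuristic, and all dimensions and the estimator itself are assumptions for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 5, 300  # ambient dimension, sparsity, number of one-bit measurements

# k-sparse signal with unit norm (one-bit measurements lose all scale information,
# so only the direction of x can be recovered)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)

# one-bit compressed sensing: observe only the signs of <a_i, x>
A = rng.standard_normal((m, n))
y = np.sign(A @ x)

# back-projection: E[(1/m) A^T y] is proportional to x for Gaussian A,
# so averaging the signed rows points roughly along the true direction
z = A.T @ y / m

# keep the k largest-magnitude coordinates, then renormalize
est_support = np.argsort(np.abs(z))[-k:]
x_hat = np.zeros(n)
x_hat[est_support] = z[est_support]
x_hat /= np.linalg.norm(x_hat)

print("support overlap:", len(set(support) & set(est_support)), "of", k)
print("correlation <x, x_hat>:", float(x @ x_hat))
```

With $m$ on the order of $k \lg(n/k)$ measurements (up to constants), estimators of this flavor achieve a direction estimate whose correlation with the true signal grows with $m$; the abstract's point is that a *consistent* algorithm attains this sample complexity for all probability measures on $\mathbb{R}^n$.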
