
Robust Decoding from 1-Bit Compressive Sampling with Least Squares

3 November 2017
Jian Huang
Yuling Jiao
Xiliang Lu
Liping Zhu
arXiv:1711.01206
Abstract

In 1-bit compressive sensing (1-bit CS), where the target signal is coded into binary measurements, the goal is to recover the signal from noisy and quantized samples. Mathematically, the 1-bit CS model reads $y = \eta \odot \textrm{sign}(\Psi x^* + \epsilon)$, where $x^* \in \mathcal{R}^{n}$, $y \in \mathcal{R}^{m}$, $\Psi \in \mathcal{R}^{m\times n}$, $\epsilon$ is the random error before quantization, and $\eta \in \mathcal{R}^{m}$ is a random vector modeling the sign flips. Due to the presence of nonlinearity, noise, and sign flips, decoding from 1-bit CS is quite challenging. In this paper, we consider a least squares approach under both the over-determined and the under-determined settings. For $m > n$, we show that, up to a constant $c$, with high probability the least squares solution $x_{\textrm{ls}}$ approximates $x^*$ with precision $\delta$ as long as $m \geq \widetilde{\mathcal{O}}(\frac{n}{\delta^2})$. For $m < n$, we prove that, up to a constant $c$, with high probability the $\ell_1$-regularized least squares solution $x_{\ell_1}$ lies in the ball with center $x^*$ and radius $\delta$ provided that $m \geq \mathcal{O}(\frac{s\log n}{\delta^2})$ and $\|x^*\|_0 := s < m$. We introduce a Newton-type method, the so-called primal-dual active set (PDAS) algorithm, to solve the nonsmooth optimization problem. The PDAS possesses a one-step convergence property and only requires solving a small least squares problem on the active set; it is therefore extremely efficient for recovering sparse signals through continuation. We propose a novel regularization parameter selection rule which does not introduce any extra computational overhead. Extensive numerical experiments are presented to illustrate the robustness of our proposed model and the efficiency of our algorithm.
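
To make the model concrete, the sketch below (not the authors' code) simulates 1-bit measurements $y = \eta \odot \textrm{sign}(\Psi x^* + \epsilon)$ with a Gaussian $\Psi$ and decodes them in both regimes: plain least squares for $m > n$ and $\ell_1$-regularized least squares for $m < n$. The dimensions, noise level, sign-flip rate, regularization level, and the ISTA loop (a simple proximal-gradient stand-in for the paper's PDAS algorithm with continuation) are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): simulate the 1-bit CS model
# y = eta * sign(Psi x* + eps) and decode it in the two regimes discussed
# in the abstract. Dimensions, noise level, sign-flip rate, lambda, and the
# ISTA stand-in for the PDAS solver are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def one_bit_measurements(Psi, x_star, noise=0.1, flip_prob=0.05):
    """y = eta * sign(Psi x* + eps): Gaussian pre-quantization noise plus random sign flips."""
    m = Psi.shape[0]
    eps = noise * rng.standard_normal(m)
    eta = np.where(rng.random(m) < flip_prob, -1.0, 1.0)
    return eta * np.sign(Psi @ x_star + eps)

# Over-determined regime (m > n): plain least squares, then normalize,
# since the 1-bit model identifies x* only up to a positive constant c.
m, n = 2000, 50
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
Psi = rng.standard_normal((m, n))
y = one_bit_measurements(Psi, x_star)
x_ls, *_ = np.linalg.lstsq(Psi, y, rcond=None)
x_ls /= np.linalg.norm(x_ls)
print("least squares direction error:", np.linalg.norm(x_ls - x_star))

# Under-determined regime (m < n): l1-regularized least squares.
# The paper solves this with the Newton-type PDAS algorithm plus continuation;
# a plain ISTA (proximal-gradient) loop stands in for it here.
m, n, s = 300, 1000, 5
x_star = np.zeros(n)
x_star[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_star /= np.linalg.norm(x_star)
Psi = rng.standard_normal((m, n))
y = one_bit_measurements(Psi, x_star)

lam = 0.1 * np.max(np.abs(Psi.T @ y))      # assumed regularization level
step = 1.0 / np.linalg.norm(Psi, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * (Psi.T @ (Psi @ x - y))                    # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
x_l1 = x / (np.linalg.norm(x) + 1e-12)
print("l1-regularized direction error:", np.linalg.norm(x_l1 - x_star))
```

In both cases the recovered vector is normalized before comparison, reflecting the "up to a constant $c$" guarantees in the abstract: 1-bit measurements carry no scale information, so only the direction of $x^*$ is recoverable.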
