
Robust One-Bit Recovery via ReLU Generative Networks: Near Optimal Statistical Rate and Global Landscape Analysis

Abstract

We study the robust one-bit compressed sensing problem, whose goal is to design an algorithm that faithfully recovers any sparse target vector $\theta_0 \in \mathbb{R}^d$ uniformly from $m$ quantized noisy measurements. Under the assumption that the measurements are sub-Gaussian random vectors, to recover any $k$-sparse $\theta_0$ ($k \ll d$) uniformly up to an error $\varepsilon$ with high probability, the best known computationally tractable algorithm requires $m \geq \tilde{\mathcal{O}}(k \log d / \varepsilon^4)$ measurements. In this paper, we consider a new framework for the one-bit sensing problem in which sparsity is implicitly enforced by mapping a low-dimensional representation $x_0 \in \mathbb{R}^k$ through a known $n$-layer ReLU generative network $G : \mathbb{R}^k \rightarrow \mathbb{R}^d$. Such a framework imposes low-dimensional priors on $\theta_0$ without a known basis. We propose to recover the target $G(x_0)$ via an unconstrained empirical risk minimization (ERM) problem under a much weaker sub-exponential measurement assumption. For this problem, we establish a joint statistical and computational analysis. In particular, we prove that the ERM estimator in this new framework achieves a statistical rate of $m = \tilde{\mathcal{O}}(kn \log d / \varepsilon^2)$, recovering any $G(x_0)$ uniformly up to an error $\varepsilon$. When the network is shallow (i.e., $n$ is small), we show that this rate matches the information-theoretic lower bound up to logarithmic factors in $\varepsilon^{-1}$. From the computational perspective, despite the non-convexity, we prove that the objective of our ERM problem has no spurious stationary points; that is, every stationary point is equally good for recovering the true target up to scaling, with a certain accuracy.

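The setup above can be illustrated with a minimal sketch: one-bit (sign) measurements of a target $G(x_0)$ produced by a known ReLU generator, recovered by unconstrained gradient descent on a surrogate ERM loss. All concrete choices here are illustrative assumptions, not the paper's construction: the two-layer random-weight generator, the least-squares surrogate loss, and finite-difference gradients are stand-ins chosen for brevity.

```python
import numpy as np

# Illustrative sketch of one-bit recovery with a ReLU generative prior.
# The generator G, the surrogate loss, and all dimensions are assumptions
# made for this example, not the paper's exact objective or architecture.

rng = np.random.default_rng(0)
k, d, m = 3, 20, 400          # latent dim, ambient dim, number of measurements

# Known 2-layer ReLU generator G: R^k -> R^d with fixed random weights
# (scaled by 1/sqrt(fan_in) to keep outputs at a moderate magnitude).
W1 = rng.standard_normal((16, k)) / np.sqrt(k)
W2 = rng.standard_normal((d, 16)) / np.sqrt(16)

def G(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# One-bit measurements y_i = sign(<a_i, G(x0)>) of an unknown target G(x0).
x0 = rng.standard_normal(k)
A = rng.standard_normal((m, d))
y = np.sign(A @ G(x0))

# Unconstrained ERM with a least-squares surrogate; since the signs carry
# no magnitude information, recovery is only up to scaling.
def loss(x):
    return np.mean((y - A @ G(x)) ** 2)

def grad(x, eps=1e-5):
    # Central finite differences, for brevity (k is tiny here).
    g = np.zeros_like(x)
    for i in range(k):
        e = np.zeros(k)
        e[i] = eps
        g[i] = (loss(x + e) - loss(x - e)) / (2.0 * eps)
    return g

x = rng.standard_normal(k)
init_loss = loss(x)
for _ in range(800):
    x -= 0.02 * grad(x)
final_loss = loss(x)
```

After descent, the alignment between the estimate and the target can be checked via the cosine similarity `G(x) @ G(x0) / (np.linalg.norm(G(x)) * np.linalg.norm(G(x0)))`, reflecting that one-bit measurements determine the target only up to scale.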