
Information-Theoretic Lower Bounds for Compressive Sensing with Generative Models

Abstract

It has recently been shown that for compressive sensing, significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably-chosen generative model. In particular, in (Bora {\em et al.}, 2017) it was shown that roughly $O(k \log L)$ random Gaussian measurements suffice for accurate recovery when the generative model is an $L$-Lipschitz function with bounded $k$-dimensional inputs, and $O(kd \log w)$ measurements suffice when the generative model is a $k$-input ReLU network with depth $d$ and width $w$. In this paper, we establish corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis. In accordance with the above upper bounds, our results are summarized as follows: (i) We construct an $L$-Lipschitz generative model capable of generating group-sparse signals, and show that the resulting necessary number of measurements is $\Omega(k \log L)$; (ii) Using similar ideas, we construct ReLU networks with high depth and/or high width for which the necessary number of measurements scales as $\Omega\big(kd \frac{\log w}{\log n}\big)$ (with output dimension $n$), and in some cases $\Omega(kd \log w)$. As a result, we establish that the scaling laws derived in (Bora {\em et al.}, 2017) are optimal or near-optimal in the absence of further assumptions.
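To make the measurement model concrete, the following is a minimal illustrative sketch (not the paper's construction) of the setting: a random ReLU generative network $G$ with $k$-dimensional input, depth $d$, and width $w$, observed through $m$ random Gaussian measurements with $m$ on the order of $kd \log w$. All parameter values and names here are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical sketch of the compressive sensing setup with a generative
# prior: y = A G(z), where G is a random ReLU network. Not the authors'
# lower-bound construction; all dimensions below are arbitrary examples.
rng = np.random.default_rng(0)

k, d, w, n = 10, 3, 128, 1000  # input dim, depth, width, output dim

# Random ReLU network G: R^k -> R^n with d layers of width w.
dims = [k] + [w] * (d - 1) + [n]
weights = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i])
           for i in range(len(dims) - 1)]

def G(z):
    h = z
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)  # ReLU activations on hidden layers
    return weights[-1] @ h          # linear output layer

# Signal in the range of G, observed via m Gaussian measurements. The
# upper bound of Bora et al. (2017) suggests m ~ O(kd log w) suffices;
# the paper shows this scaling is optimal or near-optimal.
z_true = rng.standard_normal(k)
x_true = G(z_true)

m = int(np.ceil(k * d * np.log(w)))           # measurement count ~ kd log w
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurements
y = A @ x_true                                # noiseless observations

print(f"measurements m = {m} vs. ambient dimension n = {n}")
```

Note that $m$ here is far smaller than the ambient dimension $n$; the paper's lower bounds establish that no recovery algorithm can succeed with substantially fewer measurements than this scaling, absent further assumptions.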
