Lower Bounds for Compressed Sensing with Generative Models

6 December 2019
Akshay Kamath
Sushrut Karmalkar
Eric Price
Abstract

The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, "structure" is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with~\cite{BJPD17} has instead considered structure to come from a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. We present two results establishing the difficulty of this latter task, showing that existing bounds are tight. First, we provide a lower bound matching the~\cite{BJPD17} upper bound for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function that requires roughly $\Omega(k \log L)$ linear measurements for sparse recovery to be possible. This holds even for the more relaxed goal of \emph{nonuniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \mathbb{R}^{2k} \to \mathbb{R}^n$ with $O(1)$ layers and $O(kn)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors.
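To make the measurement setup concrete, the following is a minimal Python sketch of recovery under a generative prior in the spirit of the \cite{BJPD17} approach the abstract builds on: given measurements $y \approx Ax$ and a generator $G$, search the latent space for a $z$ that minimizes $\|AG(z) - y\|_2$. The toy generator, dimensions, step size, and iteration count here are illustrative assumptions, not the constructions or bounds from this paper.

    import numpy as np

    def relu(v):
        return np.maximum(v, 0.0)

    def make_generator(k, n, rng):
        # Toy one-hidden-layer ReLU generator G: R^k -> R^n (illustrative only).
        W1 = rng.standard_normal((4 * k, k))
        W2 = rng.standard_normal((n, 4 * k)) / np.sqrt(4 * k)
        return lambda z: W2 @ relu(W1 @ z)

    def recover(A, y, G, k, steps=3000, lr=0.05, rng=None):
        # Estimate x ~= G(z*) by finite-difference gradient descent on ||A G(z) - y||^2.
        rng = rng or np.random.default_rng(0)
        z = 0.1 * rng.standard_normal(k)
        eps = 1e-5
        for _ in range(steps):
            base = np.sum((A @ G(z) - y) ** 2)
            grad = np.zeros_like(z)
            for i in range(k):
                zp = z.copy()
                zp[i] += eps
                grad[i] = (np.sum((A @ G(zp) - y) ** 2) - base) / eps
            z -= lr * grad
        return G(z)

    rng = np.random.default_rng(1)
    k, n, m = 3, 50, 15                      # latent dim, signal dim, measurements (m << n)
    G = make_generator(k, n, rng)
    x_true = G(rng.standard_normal(k))       # a signal in the range of G
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true                           # noiseless measurements, for simplicity
    x_hat = recover(A, y, G, k, rng=rng)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))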
