Solving Quadratic Systems with Full-Rank Matrices Using Sparse or Generative Priors

16 September 2023
Junren Chen, Shuai Huang, Michael K. Ng, Zhaoqiang Liu
arXiv:2309.09032
Abstract

The problem of recovering a signal $\boldsymbol{x} \in \mathbb{R}^n$ from a quadratic system $\{y_i=\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x},\ i=1,\ldots,m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in applications such as unassigned distance geometry and sub-wavelength imaging. With i.i.d. standard Gaussian matrices $\boldsymbol{A}_i$, this paper addresses the high-dimensional case where $m\ll n$ by incorporating prior knowledge of $\boldsymbol{x}$. First, we consider a $k$-sparse $\boldsymbol{x}$ and introduce the thresholded Wirtinger flow (TWF) algorithm, which does not require knowledge of the sparsity level $k$. TWF comprises two steps: a spectral initialization that identifies a point sufficiently close to $\boldsymbol{x}$ (up to a sign flip) when $m=O(k^2\log n)$, and a thresholded gradient descent that, given a good initialization, produces a sequence converging linearly to $\boldsymbol{x}$ with $m=O(k\log n)$ measurements. Second, we explore the generative prior, assuming that $\boldsymbol{x}$ lies in the range of an $L$-Lipschitz continuous generative model with $k$-dimensional inputs in an $\ell_2$-ball of radius $r$. We develop the projected gradient descent (PGD) algorithm, which also comprises two steps: a projected power method that provides an initial vector with $O\big(\sqrt{\tfrac{k \log L}{m}}\big)$ $\ell_2$-error given $m=O(k\log(Lnr))$ measurements, and a projected gradient descent step that refines the $\ell_2$-error to $O(\delta)$ at a geometric rate when $m=O\big(k\log\tfrac{Lrn}{\delta^2}\big)$. Experimental results corroborate our theoretical findings and show that (i) our approach for the sparse case notably outperforms the existing provable algorithm, sparse power factorization, and (ii) leveraging the generative prior allows for precise image recovery on the MNIST dataset from a small number of quadratic measurements.
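To make the two-step structure of TWF concrete, here is a minimal NumPy sketch under the paper's Gaussian measurement model. The spectral step relies on the identity $\mathbb{E}[y_i\boldsymbol{A}_i]=\boldsymbol{x}\boldsymbol{x}^\top$, which holds for i.i.d. standard Gaussian entries; the norm estimate uses $\mathrm{Var}(\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x})=\|\boldsymbol{x}\|^4$. The fixed soft-threshold `tau` and step constant `mu` are illustrative placeholders, not the paper's adaptive, sparsity-free threshold rule.

```python
import numpy as np

def soft_threshold(v, tau):
    """Entrywise soft-thresholding; shrinks small entries to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def twf_sparse_quadratic(A, y, n_iters=500, mu=0.1, tau=0.05, rtol=1e-10):
    """Sketch of thresholded Wirtinger flow for y_i = x^T A_i x.

    A : (m, n, n) i.i.d. standard Gaussian measurement matrices.
    y : (m,) quadratic measurements.
    mu, tau : illustrative tuning constants (assumptions, not the paper's rule).
    """
    m, n, _ = A.shape
    # x^T A_i x depends only on the symmetric part of A_i.
    As = 0.5 * (A + np.transpose(A, (0, 2, 1)))

    # Step 1 -- spectral initialization.
    # For i.i.d. N(0,1) entries, E[y_i A_i] = x x^T, so the leading
    # eigenvector of (1/m) sum_i y_i As_i aligns with x (up to sign).
    D = np.tensordot(y, As, axes=1) / m
    w, V = np.linalg.eigh(D)
    z = V[:, np.argmax(np.abs(w))]
    # Scale: Var(x^T A_i x) = ||x||^4, so ||x||^2 ~ sqrt(mean(y_i^2)).
    z = z * np.mean(y ** 2) ** 0.25
    step = mu / max(np.linalg.norm(z) ** 2, 1e-12)  # WF-style step size

    # Step 2 -- thresholded gradient descent on the squared loss
    # f(z) = (1/2m) sum_i (z^T As_i z - y_i)^2.
    for _ in range(n_iters):
        quad = np.einsum('mij,i,j->m', As, z, z)             # z^T A_i z
        grad = (2.0 / m) * np.einsum('m,mij,j->i', quad - y, As, z)
        z_new = soft_threshold(z - step * grad, tau)
        if np.linalg.norm(z_new - z) <= rtol * np.linalg.norm(z):
            return z_new
        z = z_new
    return z
```

A quick synthetic check on random data; recovery is only defined up to a global sign flip, so the error takes the better of the two signs:

```python
rng = np.random.default_rng(0)
n, m, k = 80, 60, 3
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,i,j->m', A, x, x)
z = twf_sparse_quadratic(A, y)
print(min(np.linalg.norm(z - x), np.linalg.norm(z + x)))
```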
