The All-or-Nothing Phenomenon in Sparse Linear Regression

12 March 2019
Galen Reeves
Jiaming Xu
Ilias Zadik
arXiv:1903.05046
Abstract

We study the problem of recovering a hidden binary $k$-sparse $p$-dimensional vector $\beta$ from $n$ noisy linear observations $Y = X\beta + W$, where the $X_{ij}$ are i.i.d. $\mathcal{N}(0,1)$ and the $W_i$ are i.i.d. $\mathcal{N}(0,\sigma^2)$. A closely related hypothesis testing problem is to distinguish the pair $(X, Y)$ generated from this structured model from a corresponding null model in which $(X, Y)$ consists of purely independent Gaussian entries. In the low-sparsity $k = o(p)$ and high signal-to-noise-ratio $k/\sigma^2 = \Omega(1)$ regime, we establish an `All-or-Nothing' information-theoretic phase transition at a critical sample size $n^* = 2k \log(p/k) / \log(1 + k/\sigma^2)$, resolving a conjecture of \cite{gamarnikzadik}. Specifically, we show that if $\liminf_{p \to \infty} n/n^* > 1$, then the maximum likelihood estimator almost perfectly recovers the hidden vector with high probability, and moreover the true hypothesis can be detected with vanishing error probability. Conversely, if $\limsup_{p \to \infty} n/n^* < 1$, then it becomes information-theoretically impossible even to recover an arbitrarily small but fixed fraction of the hidden vector's support, or to test hypotheses strictly better than random guessing. Our proof of the impossibility result builds upon two key techniques, which may be of independent interest. First, we use a conditional second moment method to upper bound the Kullback-Leibler (KL) divergence between the structured and null models. Second, inspired by the celebrated area theorem, we establish a lower bound on the minimum mean squared estimation error of the hidden vector in terms of the KL divergence between the two models.
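To make the setup concrete, the following sketch (not from the paper; all function names are our own) draws $(X, Y)$ from the structured model with a binary $k$-sparse $\beta$ and evaluates the critical sample size $n^* = 2k\log(p/k)/\log(1 + k/\sigma^2)$ at which the all-or-nothing transition occurs:

```python
import numpy as np

def critical_sample_size(k, p, sigma2):
    """All-or-nothing threshold n* = 2 k log(p/k) / log(1 + k/sigma^2)."""
    return 2 * k * np.log(p / k) / np.log(1 + k / sigma2)

def sample_structured_model(n, p, k, sigma2, rng):
    """Draw (X, Y) with a hidden binary k-sparse beta: Y = X beta + W."""
    beta = np.zeros(p)
    beta[rng.choice(p, size=k, replace=False)] = 1.0   # hidden support
    X = rng.standard_normal((n, p))                    # X_ij ~ N(0, 1)
    W = np.sqrt(sigma2) * rng.standard_normal(n)       # W_i ~ N(0, sigma^2)
    return X, X @ beta + W, beta

if __name__ == "__main__":
    p, k, sigma2 = 10_000, 10, 1.0   # illustrative values, not from the paper
    n_star = critical_sample_size(k, p, sigma2)
    print(f"n* = {n_star:.1f}")
    rng = np.random.default_rng(0)
    # n above n* puts us in the regime where recovery is possible;
    # n below n* puts us in the impossibility regime.
    X, Y, beta = sample_structured_model(int(2 * n_star), p, k, sigma2, rng)
```

The theorem says that sampling just above this $n^*$ suffices for near-perfect recovery by maximum likelihood, while just below it essentially no fraction of the support can be recovered.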

View on arXiv