Near-ideal model selection by $\ell_1$ minimization

2 January 2008
Emmanuel J. Candès
Y. Plan
arXiv:0801.0345
Abstract

We consider the fundamental problem of estimating the mean of a vector $y = X\beta + z$, where $X$ is an $n \times p$ design matrix in which one can have far more variables than observations, and $z$ is a stochastic error term: the so-called "$p > n$" setup. When $\beta$ is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether it is possible to accurately estimate $X\beta$ using a computationally tractable algorithm. We show that, in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error that one would achieve with an oracle supplying perfect information about which variables should and should not be included in the model. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in a vast majority of cases where $X\beta$ is a sparse or nearly sparse superposition of variables, but not in all cases. Our results are nonasymptotic and widely applicable, since they simply require that pairs of predictor variables are not too collinear.
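For concreteness, the "simple quadratic program" referred to above is the lasso. The following is a minimal sketch in standard notation; the tuning parameter $\lambda$ and its common scaling with the noise level $\sigma$ and the dimension $p$ are stated here as the usual convention, not quoted from the paper:

$$\hat{\beta} \;=\; \underset{\tilde{\beta} \in \mathbb{R}^p}{\arg\min}\ \frac{1}{2}\,\|y - X\tilde{\beta}\|_{\ell_2}^2 \;+\; \lambda\,\|\tilde{\beta}\|_{\ell_1}, \qquad \lambda \;\propto\; \sigma\sqrt{2\log p}.$$

The oracle benchmark is the least-squares fit over the best subset of columns of $X$, which is computationally intractable to find in general; the abstract's claim is that the lasso's squared error $\|X\hat{\beta} - X\beta\|_{\ell_2}^2$ typically comes within a $\log p$ factor of that ideal risk.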
