The noise barrier and the large signal bias of the Lasso and other convex estimators

4 April 2018
Pierre C. Bellec
arXiv:1804.01230
Abstract

Convex estimators such as the Lasso, the matrix Lasso and the group Lasso have been studied extensively in the last two decades, demonstrating great success in both theory and practice. Two quantities are introduced, the noise barrier and the large signal bias, that provide insight into the performance of these convex regularized estimators. It is now well understood that the Lasso achieves fast prediction rates, provided that the correlations of the design satisfy some Restricted Eigenvalue or Compatibility condition, and provided that the tuning parameter is large enough. Using the two quantities introduced in the paper, we show that the compatibility condition on the design matrix is actually unavoidable to achieve fast prediction rates with the Lasso. The Lasso must incur a loss due to the correlations of the design matrix, measured in terms of the compatibility constant. This result holds for any design matrix, any active subset of covariates, and any tuning parameter. It is now well known that the Lasso enjoys a dimension reduction property: the prediction error is of order $\lambda\sqrt{k}$, where $k$ is the sparsity, even if the ambient dimension $p$ is much larger than $k$. Such results require that the tuning parameter is greater than some universal threshold. We characterize sharp phase transitions for the tuning parameter of the Lasso around a critical threshold that depends on $k$. If $\lambda$ is equal to or larger than this critical threshold, the Lasso is minimax over $k$-sparse target vectors. If $\lambda$ is equal to or smaller than this critical threshold, the Lasso incurs a loss of order $\sigma\sqrt{k}$, which corresponds to a model of size $k$, even if the target vector has fewer than $k$ nonzero coefficients. Remarkably, the lower bounds obtained in the paper also apply to random, data-driven tuning parameters. The results extend to convex penalties beyond the Lasso.
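The phase transition described above can be illustrated numerically. The sketch below is not from the paper: it fits scikit-learn's Lasso, which minimizes $\frac{1}{2n}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1$, on a simulated $k$-sparse problem and compares the in-sample prediction error for tuning parameters below and above the universal level $\sigma\sqrt{2\log(p)/n}$. The design, dimensions, signal strength and grid of tuning parameters are illustrative assumptions.

```python
# Illustrative simulation (not from the paper): compare the Lasso's in-sample
# prediction error for tuning parameters below and above the universal level
# sigma * sqrt(2 * log(p) / n). Dimensions, design, and signal are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k, sigma = 200, 1000, 10, 1.0

# k-sparse target vector and i.i.d. Gaussian design
beta = np.zeros(p)
beta[:k] = 3.0
X = rng.standard_normal((n, p))
y = X @ beta + sigma * rng.standard_normal(n)

# Universal threshold for the tuning parameter (up to constants)
lam_univ = sigma * np.sqrt(2 * np.log(p) / n)

for scale in (0.1, 0.5, 1.0, 2.0):
    lam = scale * lam_univ
    # scikit-learn's Lasso minimizes (1/(2n)) * ||y - X b||^2 + alpha * ||b||_1
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=50_000).fit(X, y)
    pred_err = np.linalg.norm(X @ (fit.coef_ - beta)) ** 2 / n
    support = int(np.sum(fit.coef_ != 0))
    print(f"lambda = {scale:.1f} x universal: "
          f"prediction error {pred_err:.3f}, support size {support}")
```

In this kind of setup the error typically degrades and the fitted support inflates as the tuning parameter drops well below the universal level, consistent with the $\sigma\sqrt{k}$ lower bound, while values at or above it stay in the $\lambda\sqrt{k}$ regime described in the abstract; a single simulation illustrates the qualitative behavior, not the sharp critical threshold.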
