The Squared-Error of Generalized LASSO: A Precise Analysis

4 November 2013
Samet Oymak
Christos Thrampoulidis
B. Hassibi
Abstract

We consider the problem of estimating an unknown signal $x_0$ from noisy linear observations $y = Ax_0 + z \in \mathbb{R}^m$. In many practical instances, $x_0$ has a certain structure that can be captured by a structure-inducing convex function $f(\cdot)$. For example, the $\ell_1$ norm can be used to encourage a sparse solution. To estimate $x_0$ with the aid of $f(\cdot)$, we consider the well-known LASSO method and provide a sharp characterization of its performance. We assume the entries of the measurement matrix $A$ and the noise vector $z$ have zero-mean normal distributions with variances $1$ and $\sigma^2$, respectively. For the LASSO estimator $x^*$, we attempt to calculate the Normalized Square Error (NSE), defined as $\frac{\|x^*-x_0\|_2^2}{\sigma^2}$, as a function of the noise level $\sigma$, the number of observations $m$, and the structure of the signal. We show that the structure of the signal $x_0$ and the choice of the function $f(\cdot)$ enter the error formulae through the summary parameters $D(\text{cone})$ and $D(\lambda)$, which are defined as the Gaussian squared distances to the subdifferential cone and to the $\lambda$-scaled subdifferential, respectively. The first LASSO estimator assumes a priori knowledge of $f(x_0)$ and is given by $\arg\min_{x}\{\|y-Ax\|_2 ~\text{subject to}~ f(x)\leq f(x_0)\}$. We prove that its worst-case NSE is achieved as $\sigma\rightarrow 0$ and concentrates around $\frac{D(\text{cone})}{m-D(\text{cone})}$. Secondly, we consider $\arg\min_{x}\{\|y-Ax\|_2+\lambda f(x)\}$ for some $\lambda\geq 0$. This time the NSE formula depends on the choice of $\lambda$ and is given by $\frac{D(\lambda)}{m-D(\lambda)}$.
We then establish a mapping between this and the third estimator $\arg\min_{x}\{\frac{1}{2}\|y-Ax\|_2^2+\lambda f(x)\}$. Finally, for a number of important structured signal classes, we translate our abstract formulae into closed-form upper bounds on the NSE.
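The summary parameter $D(\lambda)$ is the expected squared distance from a standard Gaussian vector to the $\lambda$-scaled subdifferential of $f$ at $x_0$. For the $\ell_1$ norm at a $k$-sparse $x_0$ this distance has a well-known closed form, since on-support entries of the subdifferential equal $\mathrm{sign}(x_{0,i})$ and off-support entries range over $[-1,1]$. The sketch below is illustrative only (the function names and the specific parameters $n$, $k$, $\lambda$, $m$ are not from the paper): it estimates $D(\lambda)$ for the $\ell_1$ case by Monte Carlo, checks it against the closed form, and evaluates the predicted worst-case NSE $D(\lambda)/(m-D(\lambda))$.

```python
import numpy as np
from scipy.stats import norm

def D_lambda_l1(n, k, lam):
    """Closed-form Gaussian squared distance to lam * subdiff(||.||_1) at a
    k-sparse point in R^n (standard l1 computation).
    On-support entries contribute E[(g - lam*sign(x0_i))^2] = 1 + lam^2;
    off-support entries contribute E[(|g| - lam)_+^2]."""
    off = 2.0 * ((1.0 + lam**2) * norm.sf(lam) - lam * norm.pdf(lam))
    return k * (1.0 + lam**2) + (n - k) * off

def D_lambda_l1_mc(n, k, lam, trials=2000, seed=0):
    """Monte Carlo estimate of E[dist(g, lam * subdiff ||x0||_1)^2], g ~ N(0, I_n).
    WLOG the k on-support signs are taken to be +1."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((trials, n))
    # On-support: the scaled subdifferential is the single point lam * sign(x0_i).
    on = (g[:, :k] - lam) ** 2
    # Off-support: the scaled subdifferential is the interval [-lam, lam],
    # so the distance is the positive part (|g_i| - lam)_+.
    off = np.maximum(np.abs(g[:, k:]) - lam, 0.0) ** 2
    return (on.sum(axis=1) + off.sum(axis=1)).mean()

# Illustrative regime: n = 1000 ambient dimensions, k = 50 nonzeros,
# m = 500 measurements, penalty lam = 2.0.
n, k, lam, m = 1000, 50, 2.0, 500
D = D_lambda_l1(n, k, lam)
print("D(lambda), closed form :", D)
print("D(lambda), Monte Carlo :", D_lambda_l1_mc(n, k, lam))
print("predicted worst-case NSE D/(m-D):", D / (m - D))
```

The prediction is meaningful only when $m > D(\lambda)$; as $m$ approaches $D(\lambda)$ from above, the predicted NSE blows up, matching the phase-transition behavior of the measurement count.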
