We address the issue of estimating the regression vector $\beta$ in the generic $s$-sparse linear model $y = X\beta + z$, with $\beta \in \mathbb{R}^p$, $y \in \mathbb{R}^n$, and $z \sim \mathcal{N}(0, \sigma^2 I_n)$, when the variance $\sigma^2$ is unknown. We study two LASSO-type methods that jointly estimate $\beta$ and the variance. These estimators are minimizers of the penalized least-squares functional, where the relaxation parameter is tuned according to two different strategies. In the first strategy, the relaxation parameter is of the order $\hat{\sigma} \sqrt{\log p}$, where $\hat{\sigma}^2$ is the empirical variance. The resulting optimization problem can be solved by running only a few successive LASSO instances with recursive updating of the relaxation parameter. In the second strategy, the relaxation parameter is chosen so as to enforce a trade-off between the fidelity and penalty terms at optimality. For both estimators, our assumptions are similar to the ones proposed by Cand\`es and Plan in {\it Ann. Stat. (2009)} for the case where $\sigma^2$ is known. We prove that our estimators ensure exact recovery of the support and sign pattern of $\beta$ with high probability. We present simulation results showing that, in practice, the first estimator enjoys nearly the same performance as the standard LASSO (known-variance case) over a wide range of signal-to-noise ratios. Our second estimator is shown to outperform both in terms of false detection when the signal-to-noise ratio is low.
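As a concrete illustration of the first strategy, here is a minimal Python sketch that alternates LASSO fits with recursive updating of the noise estimate, so that the relaxation parameter tracks $\hat{\sigma}\sqrt{\log p}$. It uses scikit-learn's `Lasso`; the initial noise estimate, the constant $\sqrt{2 \log p / n}$ (which compensates for scikit-learn's $1/(2n)$ scaling of the quadratic term), and the stopping rule are illustrative assumptions, not the paper's precise tuning.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_unknown_variance(X, y, n_iter=10, tol=1e-6):
    """Sketch of the first strategy: run successive LASSO instances,
    updating the relaxation parameter from the current noise estimate."""
    n, p = X.shape
    sigma = np.std(y)  # crude initial noise estimate (assumption)
    beta = np.zeros(p)
    for _ in range(n_iter):
        # sklearn minimizes (1/(2n))||y - Xb||^2 + alpha * ||b||_1, so
        # alpha = sigma_hat * sqrt(2 log p / n) mimics a penalty of order
        # sigma_hat * sqrt(log p) in the unnormalized objective.
        alpha = sigma * np.sqrt(2.0 * np.log(p) / n)
        beta = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
        sigma_new = np.sqrt(np.mean((y - X @ beta) ** 2))  # empirical variance
        if abs(sigma_new - sigma) < tol:  # relaxation parameter has stabilized
            return beta, sigma_new
        sigma = sigma_new
    return beta, sigma

# Toy demonstration on a synthetic s-sparse problem with p > n.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, s = 100, 500, 5
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 3.0
    y = X @ beta_true + 0.5 * rng.standard_normal(n)
    beta_hat, sigma_hat = lasso_unknown_variance(X, y)
    print("recovered support:", np.flatnonzero(beta_hat))
    print("estimated sigma:  ", sigma_hat)
```

In practice only a handful of iterations are needed, since each LASSO fit refines the variance estimate, which in turn refines the relaxation parameter.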