Improved bounds for Square-Root Lasso and Square-Root Slope

Extending the results of Bellec, Lecué and Tsybakov to the setting of sparse high-dimensional linear regression with unknown variance, we show that two estimators, the Square-Root Lasso and the Square-Root Slope, can achieve the optimal minimax prediction rate, which is $(s/n)\log(p/s)$ up to some constant, under some mild conditions on the design matrix. Here, $n$ is the sample size, $p$ is the dimension and $s$ is the sparsity parameter. We also prove optimality for the estimation error in the $\ell_q$-norm, with $q \in [1, 2]$, for the Square-Root Lasso, and in the $\ell_2$ and sorted $\ell_1$ norms for the Square-Root Slope. Both estimators are adaptive to the unknown variance of the noise. The Square-Root Slope is also adaptive to the sparsity $s$ of the true parameter. Next, we prove that any estimator depending on $s$ that attains the minimax rate admits an adaptive-to-$s$ version still attaining the same rate. We apply this result to the Square-Root Lasso. Moreover, for both estimators, we obtain valid rates for a wide range of confidence levels, as well as improved concentration properties, as in [Bellec, Lecué and Tsybakov, 2017], where the case of known variance is treated. Our results are non-asymptotic.
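For reference, here is a sketch of the two estimators in their standard form; the notation (the linear model $y = X\beta^* + \sigma\xi$ with $X \in \mathbb{R}^{n \times p}$, the tuning parameter $\lambda > 0$, and the weights $\lambda_1 \ge \dots \ge \lambda_p \ge 0$) follows the usual convention in this literature and is not taken verbatim from the paper:

$$\hat{\beta}^{\mathrm{SQL}} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{\lVert y - X\beta \rVert_2}{\sqrt{n}} + \lambda \lVert \beta \rVert_1,$$

$$\hat{\beta}^{\mathrm{SQS}} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{\lVert y - X\beta \rVert_2}{\sqrt{n}} + \sum_{j=1}^{p} \lambda_j \, |\beta|_{(j)},$$

where $|\beta|_{(1)} \ge \dots \ge |\beta|_{(p)}$ denote the entries of $\beta$ sorted in decreasing order of absolute value. Because the residual norm enters unsquared, the optimal choice of the tuning parameters does not scale with the noise level $\sigma$, which is the mechanism behind the adaptivity to the unknown variance mentioned above.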
View on arXiv