
Improved bounds for Square-Root Lasso and Square-Root Slope

Abstract

Extending the results of Bellec, Lecué and Tsybakov to the setting of sparse high-dimensional linear regression with unknown variance, we show that two estimators, the Square-Root Lasso and the Square-Root Slope, can achieve the optimal minimax prediction rate $(s/n)\log(p/s)$, up to a constant, under mild conditions on the design matrix. Here, $n$ is the sample size, $p$ is the dimension and $s$ is the sparsity parameter. We also prove optimality of the estimation error in the $\ell_q$ norms, with $q \in [1,2]$, for the Square-Root Lasso, and in the $\ell_2$ and sorted $\ell_1$ norms for the Square-Root Slope. Both estimators are adaptive to the unknown variance of the noise. The Square-Root Slope is also adaptive to the sparsity $s$ of the true parameter. Next, we prove that any estimator depending on $s$ that attains the minimax rate admits an adaptive-to-$s$ version still attaining the same rate; we apply this result to the Square-Root Lasso. Moreover, for both estimators we obtain valid rates for a wide range of confidence levels, as well as improved concentration properties, as in [Bellec, Lecué and Tsybakov, 2017], where the case of known variance is treated. Our results are non-asymptotic.
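For context, here are the standard formulations of the two estimators discussed (a reminder of well-known definitions rather than the paper's exact statements; the labels SRL and SRS and the tuning parameters $\lambda$ and $\lambda_1 \ge \dots \ge \lambda_p > 0$ are generic placeholders, and their calibration is part of the paper's contribution):

$$\hat\beta^{\mathrm{SRL}} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \left\{ \frac{\|y - X\beta\|_2}{\sqrt{n}} + \lambda \|\beta\|_1 \right\}, \qquad \hat\beta^{\mathrm{SRS}} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \left\{ \frac{\|y - X\beta\|_2}{\sqrt{n}} + \sum_{j=1}^p \lambda_j |\beta|_{(j)} \right\},$$

where $|\beta|_{(1)} \ge \dots \ge |\beta|_{(p)}$ denote the coordinates of $\beta$ sorted in decreasing order of magnitude (the sorted $\ell_1$ penalty). Using the unsquared residual norm $\|y - X\beta\|_2/\sqrt{n}$ in place of the usual squared loss is what allows the tuning parameters, and hence the estimators, to be chosen without knowledge of the noise variance.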
