Fast-rate and optimistic-rate error bounds for L1-regularized regression

We consider the prediction error of linear regression with L1 regularization when the number of covariates $p$ is large relative to the sample size $n$. When the model is $k$-sparse and well-specified, and the design satisfies restricted isometry or similar conditions, the excess squared prediction error can be bounded on the order of $\sigma^2 \cdot \frac{k \log(p)}{n}$, where $\sigma^2$ is the noise variance. Although such conditions are close to necessary for accurate recovery of the true coefficient vector, good predictive accuracy can be guaranteed under much milder conditions that avoid restricted isometry, at the cost of a weaker excess error bound of order $\frac{k \log(p)}{n} + \sigma \sqrt{\frac{k \log(p)}{n}}$. Here we show that this bound is indeed the best possible (up to logarithmic factors) without introducing stronger assumptions akin to restricted isometry.
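To make the setting concrete, here is a minimal simulation sketch (our illustration, not code from the paper): it generates a $k$-sparse, well-specified model with an i.i.d. Gaussian design, for which restricted-isometry-type conditions hold with high probability at these sizes, fits the Lasso with a regularization level of the standard order $\sigma \sqrt{\log(p)/n}$, and compares the empirical excess prediction error to the fast-rate quantity $\sigma^2 k \log(p)/n$. The problem sizes and the constant in the regularization level are assumptions chosen for illustration.

```python
# Minimal simulation sketch (illustrative, not from the paper): a k-sparse,
# well-specified linear model with an i.i.d. Gaussian design, fit by the
# Lasso with regularization of order sigma * sqrt(log(p) / n).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k, sigma = 200, 1000, 5, 1.0    # assumed sizes, chosen for illustration

X = rng.standard_normal((n, p))       # isotropic Gaussian design: RIP-type
beta = np.zeros(p)                    # conditions hold w.h.p. at these sizes
beta[:k] = 1.0                        # k-sparse true coefficient vector
y = X @ beta + sigma * rng.standard_normal(n)

# sklearn's Lasso minimizes (1/(2n))||y - Xb||^2 + alpha*||b||_1, so alpha
# of order sigma * sqrt(log(p)/n) matches the standard theoretical scaling;
# the constant 2.0 is an arbitrary illustrative pick, not from the paper.
lam = 2.0 * sigma * np.sqrt(np.log(p) / n)
beta_hat = Lasso(alpha=lam, fit_intercept=False, max_iter=50_000).fit(X, y).coef_

# For an isotropic design, the excess squared prediction error
# E[(x^T beta_hat - x^T beta)^2] reduces to ||beta_hat - beta||_2^2.
excess = np.sum((beta_hat - beta) ** 2)
fast_rate = sigma**2 * k * np.log(p) / n
print(f"empirical excess error:             {excess:.4f}")
print(f"fast-rate scale sigma^2 k log(p)/n: {fast_rate:.4f}")
```

Under the milder conditions the abstract describes, the guarantee degrades to the slower $\frac{k \log(p)}{n} + \sigma \sqrt{\frac{k \log(p)}{n}}$ scale; the paper's contribution is showing that, absent restricted-isometry-like assumptions, this slower rate cannot be improved beyond logarithmic factors.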