Nearly Optimal Bayesian Shrinkage for High-Dimensional Regression

During the past decade, shrinkage priors have received much attention in Bayesian analysis of high-dimensional data. This paper establishes posterior consistency for high-dimensional linear regression with a class of shrinkage priors that have heavy, flat tails and allocate a sufficiently large probability mass in a very small neighborhood of zero. While retaining its efficiency in posterior simulation, such a shrinkage prior can achieve the same nearly optimal posterior contraction rate and variable selection consistency as the spike-and-slab prior. Our numerical results show that, under posterior consistency, Bayesian methods can yield much better variable selection results than regularization methods such as Lasso and SCAD. This paper also establishes a Bernstein–von Mises type result, which leads to a convenient way to quantify the uncertainty of regression coefficient estimates.
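To illustrate the two prior properties the abstract highlights (large mass near zero plus a heavy, flat tail), the sketch below simulates draws from the horseshoe prior, a standard example of such a shrinkage prior. This is an assumed illustration, not necessarily the specific prior class analyzed in the paper: a local scale is drawn from a half-Cauchy distribution and the coefficient from a normal with that scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative horseshoe-type shrinkage prior (an assumed example, not
# necessarily the paper's exact prior class):
#   beta_j | lambda_j ~ N(0, lambda_j^2),  lambda_j ~ Half-Cauchy(0, 1).
n = 100_000
lam = np.abs(rng.standard_cauchy(n))   # half-Cauchy local scales
beta = rng.normal(0.0, lam)            # draws from the shrinkage prior

# Property 1: a sizeable fraction of the mass sits very close to zero,
# which shrinks negligible coefficients toward zero.
near_zero = np.mean(np.abs(beta) < 0.1)

# Property 2: the tails are far heavier than Gaussian, so large signals
# are left nearly unshrunk.
gauss = rng.normal(0.0, 1.0, n)
tail_ratio = np.mean(np.abs(beta) > 10) / max(np.mean(np.abs(gauss) > 10), 1 / n)

print(near_zero)    # substantial mass within 0.1 of zero
print(tail_ratio)   # orders of magnitude more tail mass than a N(0, 1)
```

Both effects are what allow a continuous shrinkage prior to mimic the spike-and-slab prior: the spike is approximated by the concentration at zero, the slab by the heavy tail.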