
Sparse Bayesian Lasso via a Variable-Coefficient $\ell_1$ Penalty

Abstract

Modern statistical learning algorithms are capable of amazing flexibility, but struggle with interpretability. One possible solution is sparsity: performing inference such that many of the parameters are estimated as being identically 0, which may be imposed through the use of nonsmooth penalties such as the $\ell_1$ penalty. However, the $\ell_1$ penalty introduces significant bias when high sparsity is desired. In this article, we retain the $\ell_1$ penalty, but define learnable penalty weights $\lambda_p$ endowed with hyperpriors. We begin the article by investigating the optimization problem this poses, developing a proximal operator associated with the $\ell_1$ norm. We then study the theoretical properties of this variable-coefficient $\ell_1$ penalty in the context of penalized likelihood. Next, we investigate application of this penalty to Variational Bayes, developing a model we call the Sparse Bayesian Lasso, which allows behavior qualitatively similar to Lasso regression to be applied to arbitrary variational models. In simulation studies, this gives us the uncertainty quantification and low-bias properties of simulation-based approaches with an order of magnitude less computation. Finally, we apply our methodology to a Bayesian lagged spatiotemporal regression model of internal displacement that occurred during the Iraqi Civil War of 2013-2017.
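To fix ideas, the building block behind the penalty described above can be illustrated with the standard closed-form proximal operator of a weighted $\ell_1$ penalty $\sum_p \lambda_p |x_p|$, which is coordinate-wise soft-thresholding. This is a minimal sketch of that well-known operator, not the paper's variable-coefficient operator with hyperpriors; the function name and arguments are illustrative assumptions.

```python
import numpy as np

def prox_weighted_l1(x, lam, step=1.0):
    """Proximal operator of x -> sum_p lam_p * |x_p| with step size `step`.

    Each coordinate is soft-thresholded by step * lam_p:
    prox(x)_p = sign(x_p) * max(|x_p| - step * lam_p, 0).
    Coordinates whose magnitude falls below the threshold become exactly 0,
    which is the mechanism that produces sparse estimates.
    """
    x = np.asarray(x, dtype=float)
    lam = np.asarray(lam, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

# Example: a larger per-coordinate weight lambda_p shrinks that coordinate
# more aggressively, possibly all the way to 0.
x = np.array([1.5, -0.3, 0.8])
lam = np.array([0.5, 0.5, 1.0])
print(prox_weighted_l1(x, lam))
```

Allowing `lam` to vary per coordinate (and to be learned, as in the article) is what distinguishes this from the uniform-weight Lasso, where a single $\lambda$ must trade off sparsity against bias for all coordinates at once.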
