Fast MCMC sampling for sparse Bayesian inference in high-dimensional
inverse problems using L1-type priors
Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, encoding similar sparsity constraints in the prior distribution of the Bayesian framework for inverse problems has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion: accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. The most commonly applied Markov chain Monte Carlo (MCMC) sampling algorithms for this purpose are Metropolis-Hastings (MH) schemes. However, we demonstrate in this article that for sparse priors relying on L1-norms, their efficiency decreases dramatically when the level of sparsity or the dimension of the unknowns is increased. In practice, Bayesian inversion for L1-type priors using these samplers is not feasible at all. We therefore develop a sampling algorithm that relies on single-component Gibbs sampling. We show that the efficiency of our Gibbs sampler even increases when the level of sparsity or the dimension of the unknowns is increased. This property not only sets it apart from the MH schemes but also challenges common beliefs about MCMC sampling.
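To illustrate the single-component Gibbs idea for an L1-type prior, the sketch below samples a posterior of the form p(x|y) ∝ exp(-||Ax - y||²/(2σ²) - λ||x||₁). Each full conditional of one component is a mixture of two one-sided truncated Gaussians (one branch for x_i > 0, one for x_i < 0), which can be sampled exactly. This is a minimal illustration of the general technique, not the authors' implementation; the function name `gibbs_l1` and all parameter choices are our own.

```python
import numpy as np
from scipy.stats import truncnorm
from scipy.special import log_ndtr


def gibbs_l1(A, y, sigma2=1.0, lam=1.0, n_iter=500, rng=None):
    """Single-component Gibbs sampler for
    p(x | y) ~ exp(-||Ax - y||^2 / (2*sigma2) - lam * ||x||_1).

    Each conditional p(x_i | x_-i, y) is a two-component mixture of
    truncated Gaussians and is drawn exactly (no accept/reject step).
    """
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)      # ||A_i||^2 for each column i
    r = y - A @ x                      # current residual y - A x
    samples = np.empty((n_iter, n))
    for t in range(n_iter):
        for i in range(n):
            a_i = A[:, i]
            r += a_i * x[i]            # residual with x_i's contribution removed
            prec = col_sq[i] / sigma2  # precision of the quadratic term in x_i
            b = a_i @ r / sigma2       # linear coefficient in x_i
            s = 1.0 / np.sqrt(prec)
            m_pos = (b - lam) / prec   # Gaussian mean on the branch x_i > 0
            m_neg = (b + lam) / prec   # Gaussian mean on the branch x_i < 0
            # log mixture weights of the two half-Gaussians
            lw_pos = 0.5 * prec * m_pos ** 2 + log_ndtr(m_pos / s)
            lw_neg = 0.5 * prec * m_neg ** 2 + log_ndtr(-m_neg / s)
            diff = np.clip(lw_neg - lw_pos, -700.0, 700.0)
            p_pos = 1.0 / (1.0 + np.exp(diff))
            if rng.random() < p_pos:   # positive branch, truncated to [0, inf)
                x[i] = truncnorm.rvs((0.0 - m_pos) / s, np.inf,
                                     loc=m_pos, scale=s, random_state=rng)
            else:                      # negative branch, truncated to (-inf, 0]
                x[i] = truncnorm.rvs(-np.inf, (0.0 - m_neg) / s,
                                     loc=m_neg, scale=s, random_state=rng)
            r -= a_i * x[i]            # restore residual with the new x_i
        samples[t] = x
    return samples
```

The key design point, and the reason such a sampler can scale well, is that every update is an exact draw from the conditional, so there is no MH rejection rate that degrades as sparsity or dimension grows.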