Fast sampling in a linear-Gaussian inverse problem

We solve the inverse problem of deblurring a pixelized image of Jupiter using both regularized deconvolution and sample-based Bayesian inference. By efficiently sampling the marginal posterior distribution over hyperparameters, and then the full conditional distribution for the deblurred image, we find that the posterior mean can be evaluated faster than a regularized inversion, once selection of the regularizing parameter is taken into account. To our knowledge, this is the first demonstration of sampling and inference that takes less compute time than regularized inversion for an inverse problem. Comparison with random-walk Metropolis-Hastings and block Gibbs MCMC shows that marginal-then-conditional sampling also outperforms these more common sampling algorithms, scaling better with problem size. When problem-specific computations are feasible, the asymptotic cost of an independent sample is one linear solve, implying that sample-based Bayesian inference may be performed directly over function spaces, when that limit exists.
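The sketch below illustrates the marginal-then-conditional idea on a toy 1-D linear-Gaussian deblurring problem, not the paper's Jupiter example or implementation: the blur matrix, smoothness prior, Gamma hyperpriors, and all tuning constants are illustrative assumptions. Step 1 runs a low-dimensional random-walk sampler on the marginal posterior for the noise and prior precisions (with the image integrated out analytically); step 2 draws the image exactly from its Gaussian full conditional, at the cost of one linear solve per sample.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's code).
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve_triangular

rng = np.random.default_rng(0)

# --- Toy forward model: 1-D Gaussian blur of an n-pixel signal ----------------
n = 64
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)              # row-normalized blur matrix

# Second-difference (smoothness) prior precision structure, scaled by delta.
D = np.diff(np.eye(n), 2, axis=0)
L = D.T @ D + 1e-6 * np.eye(n)                 # small shift keeps L positive definite

# Synthetic data: blurred boxcar plus noise with precision lam_true.
x_true = ((t > 20) & (t < 40)).astype(float)
lam_true = 100.0
y = A @ x_true + rng.normal(scale=lam_true ** -0.5, size=n)

def log_marginal_posterior(log_lam, log_del):
    """log p(lam, delta | y) up to a constant, with x integrated out:
    y | lam, delta ~ N(0, lam^-1 I + A (delta L)^-1 A^T), plus Gamma hyperpriors."""
    lam, dlt = np.exp(log_lam), np.exp(log_del)
    cov = np.eye(n) / lam + A @ np.linalg.solve(dlt * L, A.T)
    c, low = cho_factor(cov)
    quad = y @ cho_solve((c, low), y)
    logdet = 2.0 * np.sum(np.log(np.diag(c)))
    log_like = -0.5 * (quad + logdet)
    # Gamma(1, 1e-4) hyperpriors on lam and delta, written in the log parameters
    # (the +log_lam / +log_del terms are the Jacobian of the log transform).
    log_prior = (log_lam - 1e-4 * lam) + (log_del - 1e-4 * dlt)
    return log_like + log_prior

# --- Step 1: random-walk MH on the 2-D marginal posterior for (lam, delta) ----
hyper_chain = []
state = np.array([np.log(50.0), np.log(1.0)])
lp = log_marginal_posterior(*state)
for _ in range(3000):
    prop = state + 0.3 * rng.normal(size=2)
    lp_prop = log_marginal_posterior(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    hyper_chain.append(state.copy())
hyper_chain = np.exp(np.array(hyper_chain[500::25]))   # thinned, post-burn-in

# --- Step 2: exact draws from the Gaussian full conditional x | y, lam, delta -
x_samples = []
for lam, dlt in hyper_chain:
    Q = lam * A.T @ A + dlt * L                        # posterior precision of x
    C = np.linalg.cholesky(Q)                          # Q = C C^T, lower triangular
    mean = cho_solve((C, True), lam * A.T @ y)         # posterior mean Q^{-1} lam A^T y
    z = rng.normal(size=n)
    # One triangular solve turns a standard-normal vector into a draw with cov Q^{-1}.
    x_samples.append(mean + solve_triangular(C, z, lower=True, trans='T'))

x_post_mean = np.mean(x_samples, axis=0)
print("posterior-mean reconstruction error:", np.linalg.norm(x_post_mean - x_true))
```

Because the hyperparameter chain is only two-dimensional, its mixing is cheap relative to the image-space draws, and each kept hyperparameter sample yields an independent image sample via a single Cholesky solve, which is the scaling behavior the abstract refers to.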