Optimization-Based MCMC Methods for Nonlinear Hierarchical Statistical Inverse Problems

15 February 2020
Johnathan M. Bardsley
Tiangang Cui
Abstract

In many hierarchical inverse problems, we not only want to estimate high- or infinite-dimensional model parameters in the parameter-to-observable maps, but also have to estimate hyperparameters that represent critical assumptions in the statistical and mathematical modeling processes. Owing to the combined effect of high dimensionality, nonlinear dependence, and non-concave structure in the joint posterior distribution over model parameters and hyperparameters, solving inverse problems in the hierarchical Bayesian setting poses a significant computational challenge. In this work, we aim to develop scalable optimization-based Markov chain Monte Carlo (MCMC) methods for solving hierarchical Bayesian inverse problems with nonlinear parameter-to-observable maps and a broader class of hyperparameters. Our algorithmic development builds on the recently developed, scalable randomize-then-optimize (RTO) method [4] for exploring the high- or infinite-dimensional model parameter space. By using RTO either as a proposal distribution in a Metropolis-within-Gibbs update or as a biasing distribution in pseudo-marginal MCMC [2], we design efficient sampling tools for hierarchical Bayesian inversion. In particular, the integration of RTO with pseudo-marginal MCMC yields sampling performance that is robust to the model parameter dimension. We also extend our methods to nonlinear inverse problems with Poisson-distributed measurements. Numerical examples in PDE-constrained inverse problems and positron emission tomography (PET) demonstrate the performance of our methods.
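
To make the Metropolis-within-Gibbs structure mentioned in the abstract concrete, the following minimal Python sketch samples a toy hierarchical problem with a Gaussian prior on the model parameters and a Gamma hyperprior on the prior precision. The forward map G, the hyperprior, and all tuning constants are illustrative assumptions, and a plain random-walk Metropolis step stands in for the paper's RTO proposal; this is not the authors' implementation, and the pseudo-marginal variant (RTO as a biasing distribution) is not shown.

# Minimal Metropolis-within-Gibbs sketch for a toy hierarchical inverse problem.
# NOTE: the forward map G, the Gamma hyperprior, and the random-walk proposal
# below are illustrative assumptions; the paper replaces the model-parameter
# update with an RTO proposal (or uses RTO inside pseudo-marginal MCMC).
import numpy as np

rng = np.random.default_rng(0)

d = 20                   # model parameter dimension (assumed)
sigma = 0.1              # known observation noise std (assumed)
alpha, beta = 1.0, 1.0   # Gamma hyperprior on the prior precision delta (assumed)

def G(u):
    """Toy nonlinear parameter-to-observable map (stand-in for a PDE solve)."""
    return np.tanh(u) + 0.1 * u**2

# Synthetic data from a "true" parameter.
u_true = rng.standard_normal(d)
y = G(u_true) + sigma * rng.standard_normal(d)

def log_like(u):
    r = y - G(u)
    return -0.5 * np.dot(r, r) / sigma**2

def log_prior_u(u, delta):
    # N(0, delta^{-1} I) prior on u, up to a delta-independent constant.
    return 0.5 * d * np.log(delta) - 0.5 * delta * np.dot(u, u)

n_iter, step = 5000, 0.15
u, delta = np.zeros(d), 1.0
samples_delta = np.empty(n_iter)
accepted = 0

for k in range(n_iter):
    # (1) Model-parameter update: random-walk Metropolis on pi(u | delta, y).
    #     The paper would use an RTO proposal here instead.
    u_prop = u + step * rng.standard_normal(d)
    log_ratio = (log_like(u_prop) + log_prior_u(u_prop, delta)
                 - log_like(u) - log_prior_u(u, delta))
    if np.log(rng.uniform()) < log_ratio:
        u, accepted = u_prop, accepted + 1

    # (2) Hyperparameter update: pi(delta | u) is conjugate Gamma,
    #     so it can be sampled exactly (a Gibbs step).
    delta = rng.gamma(alpha + 0.5 * d, 1.0 / (beta + 0.5 * np.dot(u, u)))
    samples_delta[k] = delta

print(f"u-acceptance rate: {accepted / n_iter:.2f}")
print(f"posterior mean of delta (second half): {samples_delta[n_iter // 2:].mean():.3f}")

Because the random-walk step mixes poorly as d grows, this sketch also illustrates why the abstract emphasizes proposals, such as RTO within pseudo-marginal MCMC, whose performance is robust to the model parameter dimension.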
