Faster Uncertainty Quantification for Inverse Problems with Conditional Normalizing Flows

In inverse problems, we often have access to data consisting of paired samples $(x, y) \sim p_{X,Y}(x, y)$, where $y$ are partial observations of a physical system and $x$ represents the unknowns of the problem. Under these circumstances, we can employ supervised training to learn a solution $x$ and its uncertainty from the observations $y$. We refer to this problem as the "supervised" case. However, the data $y \sim p_Y(y)$ collected at one point could be distributed differently than observations $y' \sim p_{Y'}(y')$ relevant for a current set of problems. In the context of Bayesian inference, we propose a two-step scheme that makes use of normalizing flows and joint data to train a conditional generator $q_\theta(x \mid y)$ to approximate the target posterior density $p_{X \mid Y}(x \mid y)$. Additionally, this preliminary phase provides a density function $q_\theta(x \mid y)$, which can be recast as a prior for the "unsupervised" problem, e.g.~when only the observations $y' \sim p_{Y'}(y')$, a likelihood model $y' \mid x$, and a prior on the unknowns $x$ are known. We then train another invertible generator with output density $q'_\phi(x \mid y')$ specifically for $y'$, allowing us to sample from the posterior $p_{X \mid Y'}(x \mid y')$. We present synthetic results that demonstrate considerable training speedup when reusing the pretrained network $q_\theta(x \mid y)$ as a warm start, or preconditioning, for approximating $p_{X \mid Y'}(x \mid y')$, instead of learning from scratch. This training modality can be interpreted as an instance of transfer learning. This result is particularly relevant for large-scale inverse problems that employ expensive numerical simulations.
View on arXiv
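
As a rough illustration of the two-step idea described in the abstract, the following is a minimal, self-contained PyTorch sketch of our own construction, not the paper's implementation or code. A small conditional affine-coupling flow is first trained by maximum likelihood on paired samples (x, y); its weights are then reused as a warm start for a variational fine-tuning stage driven only by a Gaussian likelihood and a standard-normal prior for distribution-shifted observations y'. The network sizes, toy data, and forward model below are all hypothetical assumptions made for the example.

import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling layer whose scale and shift also depend on the observation y."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x, y):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, y], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                                  # keep scales bounded
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=1), s.sum(dim=1)

    def inverse(self, z, y):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, y], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=1), s.sum(dim=1)

class ConditionalFlow(nn.Module):
    """Stack of couplings; forward maps x -> z, inverse maps z -> x, both conditioned on y."""
    def __init__(self, dim, cond_dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [ConditionalCoupling(dim, cond_dim) for _ in range(n_layers)])

    def forward(self, x, y):
        logdet = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x, y)
            x = torch.flip(x, [1])                         # reverse features so both halves get updated
            logdet = logdet + ld
        return x, logdet

    def inverse(self, z, y):
        logdet = torch.zeros(z.shape[0])                   # accumulates log|det dz/dx| at the generated x
        for layer in reversed(self.layers):
            z = torch.flip(z, [1])
            z, ld = layer.inverse(z, y)
            logdet = logdet + ld
        return z, logdet

    def neg_log_likelihood(self, x, y):
        z, logdet = self.forward(x, y)                     # -log q_theta(x|y), up to a constant
        return (0.5 * (z ** 2).sum(dim=1) - logdet).mean()

dim, cond_dim, sigma = 4, 4, 0.1
flow = ConditionalFlow(dim, cond_dim)

# Step 1 ("supervised"): maximum-likelihood training of q_theta(x|y) on paired samples.
x_train = torch.randn(512, dim)                            # toy unknowns
y_train = x_train + sigma * torch.randn(512, cond_dim)     # toy forward model: identity plus noise
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    flow.neg_log_likelihood(x_train, y_train).backward()
    opt.step()

# Step 2 ("unsupervised"): warm-start from the pretrained weights and fine-tune on
# distribution-shifted observations y', using only a Gaussian likelihood and a
# standard-normal prior on x (both hypothetical choices for this toy example).
y_prime = 2.0 + sigma * torch.randn(512, cond_dim)
opt_ft = torch.optim.Adam(flow.parameters(), lr=1e-4)
for _ in range(100):
    opt_ft.zero_grad()
    z = torch.randn(512, dim)
    x, logdet_zx = flow.inverse(z, y_prime)                # draw posterior samples x ~ q(x|y')
    nll_lik = 0.5 * ((y_prime - x) ** 2).sum(dim=1) / sigma ** 2
    nll_prior = 0.5 * (x ** 2).sum(dim=1)
    loss = (nll_lik + nll_prior + logdet_zx).mean()        # KL(q || posterior), up to a constant
    loss.backward()
    opt_ft.step()

In this toy setup, the claimed benefit corresponds to the second loop starting from the pretrained weights of step 1 rather than from a random initialization, which is the warm-start / transfer-learning effect the abstract describes.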