Weak Diffusion Priors Can Still Achieve Strong Inverse-Problem Performance

Jing Jia
Wei Yuan
Sifan Liu
Liyue Shen
Guanyang Wang
Main: 20 pages · 17 figures · 14 tables · Bibliography: 1 page · Appendix: 24 pages
Abstract

Can a diffusion model trained on bedrooms recover human faces? Diffusion models are widely used as priors for inverse problems, but standard approaches usually assume a high-fidelity model trained on data that closely match the unknown signal. In practice, one often must use a mismatched or low-fidelity diffusion prior. Surprisingly, these weak priors often perform nearly as well as full-strength, in-domain baselines. We study when and why inverse solvers are robust to weak diffusion priors. Through extensive experiments, we find that weak priors succeed when measurements are highly informative (e.g., many observed pixels), and we identify regimes where they fail. Our theory, based on Bayesian consistency, gives conditions under which high-dimensional measurements make the posterior concentrate near the true signal. These results provide a principled justification for when weak diffusion priors can be used reliably.
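As a toy analogue of the posterior-concentration argument (a sketch for intuition only, not the paper's algorithm or diffusion prior): in a conjugate Gaussian model, the posterior mean approaches the true signal as the number of linear measurements grows, even when the prior mean is badly mismatched. All names and parameters below are illustrative assumptions.

```python
# Toy Bayesian-consistency illustration (not the paper's method):
# a deliberately mismatched Gaussian prior N(mu0, tau^2 I) still yields a
# posterior that concentrates near the true signal once the measurements
# y = A x + noise become numerous/informative enough.
import numpy as np

rng = np.random.default_rng(0)
d = 50                        # signal dimension
x_true = rng.normal(size=d)   # unknown signal
mu0 = 5.0 * np.ones(d)        # badly mismatched prior mean ("weak prior")
tau2, sigma2 = 1.0, 0.1       # prior variance, measurement noise variance

for m in [10, 100, 1000, 10000]:
    A = rng.normal(size=(m, d))                      # random linear forward operator
    y = A @ x_true + np.sqrt(sigma2) * rng.normal(size=m)
    # Closed-form Gaussian posterior: precision = A^T A / sigma^2 + I / tau^2
    precision = A.T @ A / sigma2 + np.eye(d) / tau2
    post_mean = np.linalg.solve(precision, A.T @ y / sigma2 + mu0 / tau2)
    err = np.linalg.norm(post_mean - x_true) / np.linalg.norm(x_true)
    print(f"m={m:6d}  relative error of posterior mean: {err:.3f}")
```

As m grows, the likelihood term dominates the (wrong) prior and the relative error shrinks, mirroring the regime in which highly informative measurements make the choice of prior nearly irrelevant.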
