
Diffusion Posterior Sampling is Computationally Intractable

Abstract

Diffusion models are a remarkably effective way of learning and sampling from a distribution $p(x)$. In posterior sampling, one is also given a measurement model $p(y \mid x)$ and a measurement $y$, and would like to sample from $p(x \mid y)$. Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction, so a number of recent works have given algorithms to heuristically approximate it; but none are known to converge to the correct distribution in polynomial time. In this paper we show that posterior sampling is \emph{computationally intractable}: under the most basic assumption in cryptography -- that one-way functions exist -- there are instances for which \emph{every} algorithm takes superpolynomial time, even though \emph{unconditional} sampling is provably fast. We also show that the exponential-time rejection sampling algorithm is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
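
The rejection-sampling baseline mentioned above can be sketched as follows. This is a minimal illustrative sketch, not code from the paper: it assumes access to an unconditional sampler for $p(x)$ (e.g. a trained diffusion sampler), an evaluable measurement likelihood $p(y \mid x)$, and an upper bound on that likelihood; all helper names are hypothetical.

```python
import numpy as np

def rejection_sample_posterior(sample_prior, likelihood, y, likelihood_bound,
                               max_tries=100_000):
    """Draw one exact sample from p(x | y) by rejection sampling.

    Hypothetical arguments (not defined in the paper):
      sample_prior()    -- draws x ~ p(x), e.g. by running a diffusion sampler
      likelihood(y, x)  -- evaluates the measurement model p(y | x)
      likelihood_bound  -- a constant M with p(y | x) <= M for all x
    """
    rng = np.random.default_rng()
    for _ in range(max_tries):
        x = sample_prior()  # unconditional sample, assumed cheap
        # Accept x with probability p(y | x) / M; accepted samples are
        # distributed exactly according to the posterior p(x | y).
        if rng.uniform() < likelihood(y, x) / likelihood_bound:
            return x
    raise RuntimeError("no sample accepted; acceptance probability may be tiny")
```

The expected number of prior samples per accepted posterior sample is roughly $M / p(y)$, which can be exponential in the problem size; the paper's lower bound says that, on hard instances, this exponential cost cannot be avoided by any algorithm.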
