STeP: A Framework for Solving Scientific Video Inverse Problems with Spatiotemporal Diffusion Priors

Reconstructing spatially and temporally coherent videos from time-varying measurements is a fundamental challenge in many scientific domains. A major difficulty arises from the sparsity of measurements, which hinders accurate recovery of temporal dynamics. Existing image diffusion-based methods rely on extracting temporal consistency directly from measurements, limiting their effectiveness on scientific tasks with high spatiotemporal uncertainty. We address this difficulty by proposing a plug-and-play framework that incorporates a learned spatiotemporal diffusion prior. Due to its plug-and-play nature, our framework can be flexibly applied to different video inverse problems without the need for task-specific design and temporal heuristics. We further demonstrate that a spatiotemporal diffusion model can be trained efficiently with limited video data. We validate our approach on two challenging scientific video reconstruction tasks: black hole video reconstruction and dynamic MRI. While baseline methods struggle to provide temporally coherent reconstructions, our approach achieves significantly improved recovery of the spatiotemporal structure of the underlying ground truth videos.
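The plug-and-play idea sketched in the abstract can be illustrated with a toy example: alternate a data-fidelity gradient step on sparse per-frame measurements with a prior (denoising) step applied jointly over space and time. In the sketch below the learned spatiotemporal diffusion prior is replaced by a simple hand-rolled spatiotemporal smoother purely for illustration; the measurement operators, problem sizes, step size, and iteration count are all assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": T frames of HxW pixels, a smooth pattern drifting over time.
T, H, W = 4, 8, 8
x_true = np.stack([np.roll(np.outer(np.hanning(H), np.hanning(W)), t, axis=1)
                   for t in range(T)])

# Sparse linear measurements per frame: y_t = A_t x_t + noise
# (far fewer measurements than pixels, so each frame is underdetermined).
m = 16
A = rng.normal(size=(T, m, H * W)) / np.sqrt(m)
sigma = 0.01
y = np.einsum('tmn,tn->tm', A, x_true.reshape(T, -1)) \
    + sigma * rng.normal(size=(T, m))

def prior_denoise(x):
    """Stand-in for a learned spatiotemporal prior: average each voxel
    with its temporal and spatial neighbors. This smoother is an
    illustrative assumption, not the paper's diffusion model."""
    x_s = x.copy()
    x_s += np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)  # temporal neighbors
    x_s += np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1)  # vertical neighbors
    x_s += np.roll(x, 1, axis=2) + np.roll(x, -1, axis=2)  # horizontal neighbors
    return x_s / 7.0

# Plug-and-play iteration: gradient step on the data term, then prior step.
x = np.zeros_like(x_true)
step = 0.05
for _ in range(200):
    resid = np.einsum('tmn,tn->tm', A, x.reshape(T, -1)) - y
    grad = np.einsum('tmn,tm->tn', A, resid).reshape(T, H, W)
    x = prior_denoise(x - step * grad)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```

Because the prior couples neighboring frames, the reconstruction borrows information across time, which is exactly what per-frame (image-only) priors cannot do when each frame's measurements are too sparse on their own.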
@article{zhang2025_2504.07549,
  title={STeP: A Framework for Solving Scientific Video Inverse Problems with Spatiotemporal Diffusion Priors},
  author={Bingliang Zhang and Zihui Wu and Berthy T. Feng and Yang Song and Yisong Yue and Katherine L. Bouman},
  journal={arXiv preprint arXiv:2504.07549},
  year={2025}
}