Controlling Space and Time with Diffusion Models

Abstract

We present 4DiM, a cascaded diffusion model for 4D novel view synthesis (NVS), supporting generation with arbitrary camera trajectories and timestamps in natural scenes, conditioned on one or more images. With a novel architecture and sampling procedure, we enable training on a mixture of 3D (with camera pose), 4D (pose+time), and video (time but no pose) data, which greatly improves generalization to unseen images and camera pose trajectories over prior works that focus on limited domains (e.g., object-centric scenes). 4DiM is the first NVS method with intuitive metric-scale camera pose control, enabled by our novel calibration pipeline for structure-from-motion-posed data. Experiments demonstrate that 4DiM outperforms prior 3D NVS models in both image fidelity and pose alignment, while also enabling the generation of scene dynamics. 4DiM provides a general framework for a variety of tasks, including single-image-to-3D, two-image-to-video (interpolation and extrapolation), and pose-conditioned video-to-video translation, which we illustrate qualitatively on a variety of scenes. For an overview, see this https URL.
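Training on a mixture of 3D, 4D, and video data implies conditioning the model on camera pose and timestamp only when a given example actually provides them. Below is a minimal sketch, in PyTorch, of how such masked conditioning could look; it is not the authors' code, and the names (MixedConditioningEmbedder, has_pose, has_time) are hypothetical.

# Hedged sketch of masked pose/time conditioning for mixed-data training.
# Not the 4DiM implementation; all names here are illustrative assumptions.
import torch
import torch.nn as nn

class MixedConditioningEmbedder(nn.Module):
    """Embeds camera pose and timestamp, zeroing out whichever is missing."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Pose given as a flattened 3x4 extrinsics matrix (12 values).
        self.pose_mlp = nn.Sequential(nn.Linear(12, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, pose, timestamp, has_pose, has_time):
        # pose: (B, 12); timestamp: (B, 1)
        # has_pose / has_time: (B,) bools; False for video-only / 3D-only examples.
        pose_emb = self.pose_mlp(pose) * has_pose[:, None].float()
        time_emb = self.time_mlp(timestamp) * has_time[:, None].float()
        return pose_emb + time_emb

# Usage: a 4D example provides both signals; a video example masks the pose.
embedder = MixedConditioningEmbedder()
pose = torch.randn(2, 12)
ts = torch.rand(2, 1)
cond = embedder(pose, ts,
                has_pose=torch.tensor([True, False]),  # second example: no pose
                has_time=torch.tensor([True, True]))
print(cond.shape)  # torch.Size([2, 256])

Zeroing the embedding of a missing signal lets a single network consume all three data sources with one interface; the paper's actual architecture and sampling procedure may handle this differently.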

@article{watson2025_2407.07860,
  title={Controlling Space and Time with Diffusion Models},
  author={Daniel Watson and Saurabh Saxena and Lala Li and Andrea Tagliasacchi and David J. Fleet},
  journal={arXiv preprint arXiv:2407.07860},
  year={2025}
}