Novel View Synthesis as Video Completion

Qi Wu
Khiem Vuong
Minsik Jeon
Srinivasa Narasimhan
Deva Ramanan
Main: 14 pages, 6 figures, 6 tables; Bibliography: 4 pages; Appendix: 4 pages
Abstract

We tackle the problem of sparse novel view synthesis (NVS) using video diffusion models: given K (≈5) multi-view images of a scene and their camera poses, we predict the view from a target camera pose. Many prior approaches leverage generative image priors encoded via diffusion models. However, models trained on single images lack multi-view knowledge. We instead argue that video models already contain implicit multi-view knowledge and so should be easier to adapt for NVS. Our key insight is to formulate sparse NVS as a low frame-rate video completion task. However, one challenge is that sparse NVS is defined over an unordered set of inputs, often too sparse to admit a meaningful order, so the model should be invariant to permutations of that input set. To this end, we present FrameCrafter, which adapts video models (naturally trained with coherent frame orderings) to permutation-invariant NVS through several architectural modifications, including per-frame latent encodings and removal of temporal positional embeddings. Our results suggest that video models can be easily trained to "forget" about time with minimal supervision, producing competitive performance on sparse-view NVS benchmarks. Project page: this https URL
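The abstract notes that removing temporal positional embeddings is one modification that makes a video model permutation-invariant over its input frames. A minimal sketch of the underlying idea: self-attention without positional embeddings is permutation-equivariant, so permuting the input frames simply permutes the outputs the same way. This toy single-head attention in NumPy is purely illustrative (all names and shapes are our own assumptions, not the paper's implementation):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # x: (num_frames, d). No temporal positional embeddings are added,
    # so attention depends only on frame content, not frame order.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax over attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((5, d))  # 5 input "frames" (hypothetical)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
perm = rng.permutation(5)

out = self_attention(x, Wq, Wk, Wv)
out_perm = self_attention(x[perm], Wq, Wk, Wv)
# Permuting inputs permutes outputs identically: permutation equivariance.
assert np.allclose(out[perm], out_perm)
```

With a temporal positional embedding added to `x` before attention, this assertion would fail, which is why dropping it is a natural step toward order-agnostic NVS.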
