VideoGPA: Distilling Geometry Priors for 3D-Consistent Video Generation

Hongyang Du
Junjie Ye
Xiaoyan Cong
Runhao Li
Jingcheng Ni
Aman Agarwal
Zeqi Zhou
Zekun Li
Randall Balestriero
Yue Wang
Main: 8 pages, 7 figures, 6 tables; bibliography: 3 pages; appendix: 21 pages
Abstract

While recent video diffusion models (VDMs) produce visually impressive results, they fundamentally struggle to maintain 3D structural consistency, often resulting in object deformation or spatial drift. We hypothesize that these failures arise because standard denoising objectives lack explicit incentives for geometric coherence. To address this, we introduce VideoGPA (Video Geometric Preference Alignment), a data-efficient self-supervised framework that leverages a geometry foundation model to automatically derive dense preference signals that guide VDMs via Direct Preference Optimization (DPO). This approach effectively steers the generative distribution toward inherent 3D consistency without requiring human annotations. VideoGPA significantly enhances temporal stability, physical plausibility, and motion coherence using minimal preference pairs, consistently outperforming state-of-the-art baselines in extensive experiments.
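To make the preference-alignment idea concrete, the sketch below shows the standard DPO objective adapted to diffusion models in the style of Diffusion-DPO, where log-likelihood terms are approximated by negative denoising errors. This is an illustrative sketch only: the function name, the scalar-error interface, and the `beta` value are assumptions, not the paper's actual implementation (which derives the preference pairs from a geometry foundation model's 3D-consistency scores).

```python
import math

def diffusion_dpo_loss(policy_err_win, policy_err_lose,
                       ref_err_win, ref_err_lose, beta=0.1):
    """Hypothetical per-pair DPO loss for a video diffusion model.

    Each argument is a scalar denoising error (e.g. noise-prediction MSE)
    for the preferred (3D-consistent, "win") or dispreferred ("lose")
    sample, under the trainable policy or the frozen reference model.
    Lower error stands in for higher log-likelihood.
    """
    # Implicit reward margin: how much more the policy improves on the
    # preferred sample (relative to the reference) than on the rejected one.
    margin = beta * ((ref_err_win - policy_err_win)
                     - (ref_err_lose - policy_err_lose))
    # Standard DPO objective: -log sigmoid(margin).
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At initialization the policy equals the reference, so the margin is zero and the loss is log 2; the loss decreases as the policy lowers its denoising error on geometry-consistent samples faster than on inconsistent ones.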
