Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model

Recent advances in conditional diffusion models have shown promise for generating realistic TalkingFace videos, yet challenges persist in achieving consistent head movement, synchronized facial expressions, and accurate lip synchronization over long-duration generation. To address these challenges, we introduce the \textbf{M}otion-priors \textbf{C}onditional \textbf{D}iffusion \textbf{M}odel (\textbf{MCDM}), which utilizes both archived-clip and present-clip motion priors to enhance motion prediction and ensure temporal consistency. The model consists of three key elements: (1) an archived-clip motion prior that incorporates historical frames and a reference frame to preserve identity and context; (2) a present-clip motion-prior diffusion model that captures multimodal causality for accurate prediction of head movements, lip sync, and expressions; and (3) a memory-efficient temporal attention mechanism that mitigates error accumulation by dynamically storing and updating motion features. We also release the \textbf{TalkingFace-Wild} dataset, a multilingual collection of over 200 hours of footage across 10 languages. Experimental results demonstrate the effectiveness of MCDM in maintaining identity and motion continuity for long-term TalkingFace generation. Code, models, and datasets will be publicly available.
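As a rough illustration of the third component, the sketch below shows one way a memory-bank temporal attention could be wired in PyTorch. The class name MemoryTemporalAttention, the fixed bank size, and the roll-and-write update rule are illustrative assumptions for the sketch, not the paper's implementation.

import torch
import torch.nn as nn

class MemoryTemporalAttention(nn.Module):
    """Illustrative memory-bank temporal attention (assumed design, not the paper's code).

    Present-clip motion features attend over a fixed-size bank of archived
    motion features; the bank is rolled and refreshed after every clip, so
    memory cost stays constant no matter how long the generated video runs.
    """

    def __init__(self, dim=256, heads=4, bank_size=32):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Motion-feature memory bank shared across clips: (bank_size, dim).
        self.register_buffer("bank", torch.zeros(bank_size, dim))

    def forward(self, x):
        # x: (batch, frames, dim) motion features of the present clip.
        mem = self.bank.unsqueeze(0).expand(x.size(0), -1, -1)
        ctx = torch.cat([mem, x], dim=1)                 # archived + present context
        out, _ = self.attn(query=x, key=ctx, value=ctx)
        with torch.no_grad():                            # update the bank without gradients
            summary = out.mean(dim=(0, 1))               # pooled motion summary, shape (dim,)
            self.bank.copy_(torch.roll(self.bank, shifts=-1, dims=0))  # drop the oldest slot
            self.bank[-1] = summary                      # store the newest clip
        return out

# Clips are processed sequentially; the bank carries motion context forward.
layer = MemoryTemporalAttention()
for clip in torch.randn(5, 2, 16, 256):                  # 5 clips, batch 2, 16 frames
    print(layer(clip).shape)                             # torch.Size([2, 16, 256])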
@article{shen2025_2502.09533,
  title   = {Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model},
  author  = {Fei Shen and Cong Wang and Junyao Gao and Qin Guo and Jisheng Dang and Jinhui Tang and Tat-Seng Chua},
  journal = {arXiv preprint arXiv:2502.09533},
  year    = {2025}
}