AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models
Recent advances in video diffusion models have significantly improved character animation. However, current approaches rely on basic structural conditions such as DWPose or SMPL-X to animate character images, limiting their effectiveness in open-domain scenarios with dynamic backgrounds or challenging human poses. In this paper, we introduce AniCrafter, a diffusion-based human-centric animation model that seamlessly integrates and animates a given character within open-domain dynamic backgrounds while following a given human motion sequence. Built on cutting-edge Image-to-Video (I2V) diffusion architectures, our model incorporates an innovative "avatar-background" conditioning mechanism that reframes open-domain human-centric animation as a restoration task, enabling more stable and versatile animation outputs. Experimental results demonstrate the superior performance of our method. Code will be available at this https URL.
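To make the conditioning idea concrete, the following is a minimal, hypothetical PyTorch sketch of how an "avatar-background" condition could be assembled: a posed avatar render is composited onto the dynamic background, and the composite is encoded as the conditioning input that the I2V diffusion model learns to "restore" into a realistic frame. The function name `build_avatar_background_condition`, the RGBA compositing, and the `latent_encoder` callable are all assumptions for illustration, not the paper's actual implementation.

```python
import torch

def build_avatar_background_condition(avatar_render, background, latent_encoder):
    """Hypothetical sketch of "avatar-background" conditioning.

    Composites a posed avatar render over the background video, then
    encodes the composite for use as a diffusion conditioning signal.
    Tensor shapes assume video clips of shape (B, C, T, H, W).
    """
    # Alpha-composite the avatar over the background frames;
    # `avatar_render` is assumed to carry RGBA channels.
    rgb, alpha = avatar_render[:, :3], avatar_render[:, 3:4]
    composite = alpha * rgb + (1.0 - alpha) * background

    # Encode to the diffusion model's latent space (e.g. a video VAE).
    return latent_encoder(composite)

if __name__ == "__main__":
    B, T, H, W = 1, 8, 64, 64
    avatar = torch.rand(B, 4, T, H, W)  # RGBA avatar render (assumed input)
    bg = torch.rand(B, 3, T, H, W)      # dynamic background video
    identity_encoder = lambda x: x      # stand-in for a real video VAE encoder
    cond = build_avatar_background_condition(avatar, bg, identity_encoder)
    print(cond.shape)  # torch.Size([1, 3, 8, 64, 64])
```

Under this reading, the diffusion model receives a coarse avatar-on-background composite and is trained to restore photorealistic appearance and interaction, which is one plausible way the "restoration task" framing could be realized.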
@article{niu2025_2505.20255,
  title={AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models},
  author={Muyao Niu and Mingdeng Cao and Yifan Zhan and Qingtian Zhu and Mingze Ma and Jiancheng Zhao and Yanhong Zeng and Zhihang Zhong and Xiao Sun and Yinqiang Zheng},
  journal={arXiv preprint arXiv:2505.20255},
  year={2025}
}