
AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models

Abstract

Recent advances in video diffusion models have significantly improved character animation techniques. However, current approaches rely on basic structural conditions such as DWPose or SMPL-X to animate character images, limiting their effectiveness in open-domain scenarios with dynamic backgrounds or challenging human poses. In this paper, we introduce AniCrafter, a diffusion-based human-centric animation model that can seamlessly integrate and animate a given character into open-domain dynamic backgrounds while following given human motion sequences. Built on cutting-edge Image-to-Video (I2V) diffusion architectures, our model incorporates an innovative "avatar-background" conditioning mechanism that reframes open-domain human-centric animation as a restoration task, enabling more stable and versatile animation outputs. Experimental results demonstrate the superior performance of our method. Codes will be available at this https URL.
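To make the "avatar-background" conditioning idea concrete, below is a minimal sketch of how such a condition video could be assembled: a motion-driven avatar render is composited over a background-only clip, and the resulting video serves as the structural condition that the diffusion backbone "restores" into a realistic result. The tensor shapes, the compositing function, and the comments about latent concatenation are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumption: mask-based compositing of an avatar render onto a
# background video; shapes and names are hypothetical, not the paper's code).
import torch

def make_avatar_background_condition(avatar_rgb, avatar_mask, background_rgb):
    """Composite a rendered, motion-driven avatar onto a dynamic background.

    avatar_rgb:     (T, 3, H, W) rendering of the animatable avatar (e.g. SMPL-X driven)
    avatar_mask:    (T, 1, H, W) soft alpha of the avatar render
    background_rgb: (T, 3, H, W) background-only video (character removed/inpainted)

    Returns a condition video in which the avatar occludes the background, so the
    diffusion model only needs to restore realistic appearance and interaction details.
    """
    return avatar_mask * avatar_rgb + (1.0 - avatar_mask) * background_rgb

# Toy example with random tensors standing in for real renders / videos.
T, H, W = 16, 256, 256
avatar_rgb = torch.rand(T, 3, H, W)
avatar_mask = (torch.rand(T, 1, H, W) > 0.5).float()
background_rgb = torch.rand(T, 3, H, W)

condition_video = make_avatar_background_condition(avatar_rgb, avatar_mask, background_rgb)

# In an I2V diffusion pipeline, this condition video would typically be encoded
# (e.g. by the model's VAE) and concatenated with the noisy latents as conditioning.
print(condition_video.shape)  # torch.Size([16, 3, 256, 256])
```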

@article{niu2025_2505.20255,
  title={AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models},
  author={Muyao Niu and Mingdeng Cao and Yifan Zhan and Qingtian Zhu and Mingze Ma and Jiancheng Zhao and Yanhong Zeng and Zhihang Zhong and Xiao Sun and Yinqiang Zheng},
  journal={arXiv preprint arXiv:2505.20255},
  year={2025}
}
Main: 7 pages, 9 figures, 4 tables; Bibliography: 4 pages; Appendix: 1 page