HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation

24 March 2025
Zunnan Xu, Zhentao Yu, Zixiang Zhou, Jun Zhou, Xiaoyu Jin, Fa-Ting Hong, Xiaozhong Ji, Junwei Zhu, Chengfei Cai, Shiyu Tang, Qin Lin, Xiu Li, Qinglin Lu
Abstract

We introduce HunyuanPortrait, a diffusion-based condition control method that employs implicit representations for highly controllable and lifelike portrait animation. Given a single portrait image as an appearance reference and video clips as driving templates, HunyuanPortrait animates the character in the reference image according to the facial expressions and head poses in the driving videos. In our framework, we use pre-trained encoders to decouple motion information from identity in the videos: an implicit representation encodes the motion information and serves as the control signal during the animation phase. Building on stable video diffusion as the backbone, we carefully design adapter layers that inject these control signals into the denoising UNet through attention mechanisms, yielding rich spatial detail and temporal consistency. HunyuanPortrait also generalizes well, effectively disentangling appearance and motion across different image styles. Our framework outperforms existing methods in temporal consistency and controllability. Our project is available at this https URL.
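
The abstract does not spell out the adapter design, but a minimal sketch of the core idea, cross-attention from UNet spatial features to implicit motion tokens, added residually so the pretrained path is preserved, might look as follows. All module names, dimensions, and the residual placement here are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn as nn

class MotionCrossAttentionAdapter(nn.Module):
    """Hypothetical adapter sketch: injects implicit motion tokens into
    UNet features via cross-attention. Dimensions are illustrative."""

    def __init__(self, feature_dim=320, motion_dim=768, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(feature_dim)
        # Queries come from UNet features; keys/values from motion tokens.
        self.attn = nn.MultiheadAttention(
            embed_dim=feature_dim, num_heads=num_heads,
            kdim=motion_dim, vdim=motion_dim, batch_first=True)

    def forward(self, hidden_states, motion_tokens):
        # hidden_states: (B, N, feature_dim) spatial tokens from a UNet block
        # motion_tokens: (B, M, motion_dim) implicit motion representation
        residual = hidden_states
        out, _ = self.attn(self.norm(hidden_states),
                           motion_tokens, motion_tokens)
        # Residual addition keeps the pretrained diffusion path intact.
        return residual + out

# Usage sketch with random tensors standing in for real features.
adapter = MotionCrossAttentionAdapter()
feats = torch.randn(2, 64, 320)    # UNet block features
motion = torch.randn(2, 16, 768)   # encoded motion tokens
out = adapter(feats, motion)       # shape: (2, 64, 320)

The residual form mirrors how adapter layers are typically grafted onto a frozen backbone: at initialization the adapter can be made a near-no-op, so fine-tuning only gradually steers the pretrained model with the motion signal.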

@article{xu2025_2503.18860,
  title={HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation},
  author={Zunnan Xu and Zhentao Yu and Zixiang Zhou and Jun Zhou and Xiaoyu Jin and Fa-Ting Hong and Xiaozhong Ji and Junwei Zhu and Chengfei Cai and Shiyu Tang and Qin Lin and Xiu Li and Qinglin Lu},
  journal={arXiv preprint arXiv:2503.18860},
  year={2025}
}