HunyuanVideo-HOMA: Generic Human-Object Interaction in Multimodal Driven Human Animation

10 June 2025
Ziyao Huang
Zixiang Zhou
Juan Cao
Yifeng Ma
Yi Chen
Zejing Rao
Zhiyong Xu
Hongmei Wang
Qin Lin
Yuan Zhou
Qinglin Lu
Fan Tang
Abstract

To address key limitations in human-object interaction (HOI) video generation -- specifically the reliance on curated motion data, limited generalization to novel objects/scenarios, and restricted accessibility -- we introduce HunyuanVideo-HOMA, a weakly conditioned multimodal-driven framework. HunyuanVideo-HOMA enhances controllability and reduces dependency on precise inputs through sparse, decoupled motion guidance. It encodes appearance and motion signals into the dual input space of a multimodal diffusion transformer (MMDiT), fusing them within a shared context space to synthesize temporally consistent and physically plausible interactions. To optimize training, we integrate a parameter-space HOI adapter initialized from pretrained MMDiT weights, preserving prior knowledge while enabling efficient adaptation, and a facial cross-attention adapter for anatomically accurate audio-driven lip synchronization. Extensive experiments confirm state-of-the-art performance in interaction naturalness and generalization under weak supervision. Finally, HunyuanVideo-HOMA demonstrates versatility in text-conditioned generation and interactive object manipulation, supported by a user-friendly demo interface. The project page is at this https URL.
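The abstract names three architectural ingredients: appearance and motion signals fused in the dual input space of an MMDiT, a parameter-space adapter initialized from pretrained weights, and a facial cross-attention adapter driven by audio. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of those three ideas, with all module names, dimensions, and the joint-attention fusion scheme chosen as assumptions for clarity.

```python
# Illustrative sketch only; module names, dimensions, and fusion scheme are assumptions.
import torch
import torch.nn as nn


class DualStreamFusionBlock(nn.Module):
    """Appearance and motion tokens fused by joint attention over a shared context."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.appearance_proj = nn.Linear(dim, dim)
        self.motion_proj = nn.Linear(dim, dim)
        self.joint_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # Project each modality, concatenate into one context sequence, attend jointly.
        ctx = torch.cat([self.appearance_proj(appearance),
                         self.motion_proj(motion)], dim=1)
        fused, _ = self.joint_attn(ctx, ctx, ctx)
        return self.norm(ctx + fused)


class ParameterSpaceAdapter(nn.Module):
    """Adapter whose weights are copied from a pretrained layer, so tuning
    starts from the prior rather than from scratch (assumed interface)."""

    def __init__(self, pretrained_linear: nn.Linear):
        super().__init__()
        self.adapter = nn.Linear(pretrained_linear.in_features,
                                 pretrained_linear.out_features)
        self.adapter.load_state_dict(pretrained_linear.state_dict())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(x)


class FacialCrossAttentionAdapter(nn.Module):
    """Face-region tokens attend to audio features for lip synchronization."""

    def __init__(self, dim: int = 512, audio_dim: int = 256, heads: int = 8):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, face_tokens: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        audio_ctx = self.audio_proj(audio)
        out, _ = self.cross_attn(face_tokens, audio_ctx, audio_ctx)
        return face_tokens + out


if __name__ == "__main__":
    appearance = torch.randn(1, 77, 512)  # e.g. reference-appearance tokens
    motion = torch.randn(1, 16, 512)      # sparse, decoupled motion guidance
    audio = torch.randn(1, 50, 256)       # audio features for the face region

    fused = DualStreamFusionBlock()(appearance, motion)
    print(fused.shape)                    # torch.Size([1, 93, 512])

    face = FacialCrossAttentionAdapter()(fused[:, :16], audio)
    print(face.shape)                     # torch.Size([1, 16, 512])
```

In the paper's framing these adapters sit on top of frozen pretrained MMDiT weights, which is what makes the weakly conditioned, sparse-guidance setup trainable without curated motion data; the toy modules above only mirror that structure at the level of tensor shapes.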

@article{huang2025_2506.08797,
  title={HunyuanVideo-HOMA: Generic Human-Object Interaction in Multimodal Driven Human Animation},
  author={Ziyao Huang and Zixiang Zhou and Juan Cao and Yifeng Ma and Yi Chen and Zejing Rao and Zhiyong Xu and Hongmei Wang and Qin Lin and Yuan Zhou and Qinglin Lu and Fan Tang},
  journal={arXiv preprint arXiv:2506.08797},
  year={2025}
}