Toward Rich Video Human-Motion2D Generation

Main: 9 pages · Bibliography: 3 pages · 7 figures · 4 tables
Abstract

Generating realistic and controllable human motions, particularly those involving rich multi-character interactions, remains a significant challenge due to data scarcity and the complexity of modeling inter-personal dynamics. To address these limitations, we first introduce Motion2D-Video-150K, a new large-scale 2D human-motion video dataset comprising 150,000 video sequences. Motion2D-Video-150K features a balanced distribution of diverse single-character and, crucially, double-character interactive actions, each paired with a detailed textual description. Building on this dataset, we propose RVHM2D, a novel diffusion-based model for rich video human-motion2D generation. RVHM2D incorporates an enhanced textual conditioning mechanism that uses either dual text encoders (CLIP-L/B) or T5-XXL, exploiting both global and local text features. We devise a two-stage training strategy: the model is first trained with a standard diffusion objective, then fine-tuned with reinforcement learning using an FID-based reward to further enhance motion realism and text alignment. Extensive experiments demonstrate that RVHM2D achieves leading performance on the Motion2D-Video-150K benchmark for both single-character and interactive double-character generation.
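
As a rough illustration of the two ingredients the abstract highlights, dual-encoder text conditioning with global and local features and the diffusion-then-RL two-stage training, the PyTorch sketch below may be helpful. It is not the authors' implementation: DualTextConditioner, stage1_diffusion_loss, stage2_rl_step, sample_fn, fid_reward_fn, and noise_sched are all hypothetical names, the encoder inputs stand in for frozen CLIP-L/B or T5-XXL token features, and the RL stage is written as a simple REINFORCE-style reward-weighted update, since the abstract does not specify the exact fine-tuning algorithm.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTextConditioner(nn.Module):
    """Fuses global (pooled) and local (per-token) features from two
    text encoders into one conditioning sequence. tok_a / tok_b stand
    in for features from pretrained backbones such as CLIP-L and CLIP-B."""

    def __init__(self, dim_a=768, dim_b=512, cond_dim=1024):
        super().__init__()
        self.proj_local = nn.Linear(dim_a + dim_b, cond_dim)
        self.proj_global = nn.Linear(dim_a + dim_b, cond_dim)

    def forward(self, tok_a, tok_b):
        # tok_a: (B, T, dim_a), tok_b: (B, T, dim_b) token-level features.
        local = self.proj_local(torch.cat([tok_a, tok_b], dim=-1))   # (B, T, C)
        pooled = torch.cat([tok_a.mean(1), tok_b.mean(1)], dim=-1)   # (B, dim_a+dim_b)
        glob = self.proj_global(pooled).unsqueeze(1)                 # (B, 1, C)
        return torch.cat([glob, local], dim=1)  # one global token + local tokens

def stage1_diffusion_loss(model, cond, x0, timesteps, noise_sched):
    """Stage 1: standard epsilon-prediction diffusion objective.
    noise_sched is a hypothetical 1-D tensor of cumulative alpha-bars."""
    noise = torch.randn_like(x0)
    alpha_bar = noise_sched[timesteps].view(-1, 1, 1)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    eps_pred = model(x_t, timesteps, cond)
    return F.mse_loss(eps_pred, noise)

def stage2_rl_step(model, optimizer, cond, sample_fn, fid_reward_fn):
    """Stage 2: one reward-weighted update. sample_fn draws motions and
    returns per-sample log-probabilities; fid_reward_fn maps motions to
    rewards derived from FID (a simplification, since FID is a set-level
    statistic and must be turned into a per-sample or per-batch signal)."""
    motions, log_probs = sample_fn(model, cond)
    rewards = fid_reward_fn(motions)              # higher = more realistic
    advantages = rewards - rewards.mean()         # simple baseline
    loss = -(advantages.detach() * log_probs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The separate global token mirrors the abstract's claim that both pooled and per-token text features condition the generator; in practice the two text encoders would be frozen pretrained models, with only the projections and the diffusion backbone trained.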

@article{xi2025_2506.14428,
  title={Toward Rich Video Human-Motion2D Generation},
  author={Ruihao Xi and Xuekuan Wang and Yongcheng Li and Shuhua Li and Zichen Wang and Yiwei Wang and Feng Wei and Cairong Zhao},
  journal={arXiv preprint arXiv:2506.14428},
  year={2025}
}