
arXiv: 2509.03883

Human Motion Video Generation: A Survey

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2025
4 September 2025
Haiwei Xue
Xiangyang Luo
Zhanghao Hu
Xin Zhang
Xunzhi Xiang
Yuqin Dai
Jianzhuang Liu
Zhensong Zhang
Minglei Li
Zhiqiang Wang
Fei Ma
Zhiyong Wu
Changpeng Yang
Zonghong Dai
Fei Richard Yu
Main: 14 pages · 15 figures · 1 table · Bibliography: 5 pages · Appendix: 1 page
Abstract

Human motion video generation has garnered significant research interest due to its broad applications, enabling innovations such as photorealistic singing heads or dynamic avatars that seamlessly dance to music. However, existing surveys in this field focus on individual methods, lacking a comprehensive overview of the entire generative process. This paper addresses this gap by providing an in-depth survey of human motion video generation, encompassing over ten sub-tasks, and detailing the five key phases of the generation process: input, motion planning, motion video generation, refinement, and output. Notably, this is the first survey that discusses the potential of large language models in enhancing human motion video generation. Our survey reviews the latest developments and technological trends in human motion video generation across three primary modalities: vision, text, and audio. By covering over two hundred papers, we offer a thorough overview of the field and highlight milestone works that have driven significant technological breakthroughs. Our goal for this survey is to unveil the prospects of human motion video generation and serve as a valuable resource for advancing the comprehensive applications of digital humans. A complete list of the models examined in this survey is available in our repository (this https URL).
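The abstract organizes the field around a five-phase generation process (input, motion planning, motion video generation, refinement, output) conditioned on vision, text, and audio. The sketch below is a minimal, hypothetical Python outline of how such a pipeline could be structured; every class and function name is an illustrative assumption, not an API from the paper or its repository.

```python
# Hypothetical sketch of the five-phase pipeline described in the survey:
# input -> motion planning -> motion video generation -> refinement -> output.
# Names and structure are illustrative placeholders only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GenerationRequest:
    """Phase 1 (input): conditioning signals across the three modalities."""
    reference_image: Optional[bytes] = None   # vision: identity / appearance
    text_prompt: Optional[str] = None         # text: action or style description
    audio_track: Optional[bytes] = None       # audio: speech or music to sync with


def plan_motion(request: GenerationRequest) -> List[list]:
    """Phase 2 (motion planning): derive an intermediate motion representation,
    e.g. pose or keypoint sequences, from the conditioning inputs."""
    return []  # placeholder motion sequence


def generate_frames(request: GenerationRequest, motion_plan: List[list]) -> List[bytes]:
    """Phase 3 (motion video generation): render raw frames conditioned on the plan."""
    return []  # placeholder frame list


def refine_frames(frames: List[bytes]) -> List[bytes]:
    """Phase 4 (refinement): post-process frames, e.g. face restoration
    or temporal smoothing."""
    return frames


def run_pipeline(request: GenerationRequest) -> List[bytes]:
    """Phases 1-5 end to end; the returned frames are the phase-5 output."""
    motion_plan = plan_motion(request)
    frames = generate_frames(request, motion_plan)
    return refine_frames(frames)
```

In this reading, concrete methods surveyed in the paper would slot into one or more of these phases (for example, an audio-driven talking-head model supplies both the planning and generation steps), which is why the authors structure the survey around the phases rather than around individual methods.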
