Diffusion Model-based Activity Completion for AI Motion Capture from Videos

27 May 2025
Gao Huayu, Huang Tengjiu, Ye Xiaolong, Tsuyoshi Okita
DiffMVGen
ArXiv (abs) · PDF · HTML
Main: 30 pages · 17 figures · 5 tables · Bibliography: 1 page · Appendix: 1 page
Abstract

AI-based motion capture is an emerging technology that offers a cost-effective alternative to traditional motion capture systems. However, current AI motion capture methods rely entirely on observed video sequences, similar to conventional motion capture. This means that all human actions must be predefined, and movements outside the observed sequences are not possible. To address this limitation, we aim to apply AI motion capture to virtual humans, where flexible actions beyond the observed sequences are required. We assume that while many action fragments exist in the training data, the transitions between them may be missing. To bridge these gaps, we propose a diffusion-model-based action completion technique that generates complementary human motion sequences, ensuring smooth and continuous movements. By introducing a gate module and a position-time embedding module, our approach achieves competitive results on the Human3.6M dataset. Our experimental results show that (1) MDC-Net outperforms existing methods in ADE, FDE, and MMADE but is slightly less accurate in MMFDE, (2) MDC-Net has a smaller model size (16.84M) compared to HumanMAC (28.40M), and (3) MDC-Net generates more natural and coherent motion sequences. Additionally, we propose a method for extracting sensor data, including acceleration and angular velocity, from human motion sequences.
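For readers unfamiliar with the reported metrics, ADE (average displacement error) averages the distance between predicted and ground-truth poses over all frames, while FDE (final displacement error) measures only the last frame; MMADE/MMFDE are their multimodal variants. Exact conventions differ slightly across papers (some take the norm over the flattened pose vector per frame), so the following is only a minimal sketch of the common per-joint version, with illustrative function names that are not taken from the paper:

import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean per-frame, per-joint L2 distance.

    pred, gt: arrays of shape (T, J, 3) holding 3D joint positions.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

def fde(pred, gt):
    """Final Displacement Error: mean per-joint L2 distance at the last frame only."""
    return np.linalg.norm(pred[-1] - gt[-1], axis=-1).mean()

# Toy example with random stand-ins for a 100-frame, 17-joint skeleton.
pred = np.random.randn(100, 17, 3)
gt = np.random.randn(100, 17, 3)
print(f"ADE={ade(pred, gt):.4f}  FDE={fde(pred, gt):.4f}")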

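The final claim, deriving sensor-like signals such as acceleration and angular velocity from motion sequences, can in principle be approximated with finite differences: the second derivative of a joint trajectory gives linear acceleration, and the frame-to-frame rotation of a limb segment gives angular velocity. The sketch below illustrates that idea only; it is not the paper's method, and the frame rate and array shapes are assumptions:

import numpy as np

FPS = 50           # assumed frame rate for Human3.6M-style sequences
DT = 1.0 / FPS

def linear_acceleration(positions):
    """Second-order finite difference of joint positions -> acceleration.

    positions: (T, 3) trajectory of a single joint. Returns (T-2, 3).
    """
    return (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / DT**2

def angular_velocity(bone_dirs):
    """Approximate angular speed of a bone from consecutive unit direction vectors.

    bone_dirs: (T, 3) unit vectors along a limb segment per frame.
    Returns (T-1,) magnitudes in rad/s from the angle between consecutive frames.
    """
    cosang = np.clip(np.sum(bone_dirs[1:] * bone_dirs[:-1], axis=-1), -1.0, 1.0)
    return np.arccos(cosang) / DT

# Toy check: a joint on a parabola and a unit vector rotating at 1 rad/s.
t = np.arange(0, 2, DT)
pos = np.stack([t, t**2, np.zeros_like(t)], axis=-1)
dirs = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=-1)
print(linear_acceleration(pos).mean(axis=0))   # approximately [0, 2, 0]
print(angular_velocity(dirs).mean())           # approximately 1.0 rad/s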
@article{huayu2025_2505.21566,
  title={Diffusion Model-based Activity Completion for AI Motion Capture from Videos},
  author={Gao Huayu and Huang Tengjiu and Ye Xiaolong and Tsuyoshi Okita},
  journal={arXiv preprint arXiv:2505.21566},
  year={2025}
}