M3G: Multi-Granular Gesture Generator for Audio-Driven Full-Body Human Motion Synthesis

13 May 2025
Zhizhuo Yin, Yuk Hang Tsui, Pan Hui
Abstract

Generating full-body human gestures encompassing face, body, hands, and global movements from audio is a valuable yet challenging task in virtual avatar creation. Previous systems tokenize human gestures frame by frame and predict the token of each frame from the input audio. However, the number of frames required for a complete, expressive human gesture, defined as its granularity, varies across gesture patterns, and existing systems fail to model these patterns because the granularity of their gesture tokens is fixed. To solve this problem, we propose a novel framework named Multi-Granular Gesture Generator (M3G) for audio-driven holistic gesture generation. In M3G, we propose a novel Multi-Granular VQ-VAE (MGVQ-VAE) to tokenize motion patterns and reconstruct motion sequences at different temporal granularities. Subsequently, we propose a multi-granular token predictor that extracts multi-granular information from audio and predicts the corresponding motion tokens. M3G then reconstructs the human gestures from the predicted tokens using the MGVQ-VAE. Both objective and subjective experiments demonstrate that our proposed M3G framework outperforms state-of-the-art methods in generating natural and expressive full-body human gestures.
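The pipeline summarized above (tokenize motion at several temporal granularities with a VQ-VAE, predict tokens from audio, then decode) can be illustrated with a small sketch. The PyTorch code below is a minimal, hypothetical rendering of the multi-granular tokenization step only; the strides (1, 4, and 16 frames), module names, codebook sizes, and dimensions are illustrative assumptions and not the authors' implementation.

# Hedged sketch: a hypothetical multi-granular VQ tokenizer for motion sequences.
# Strides, dimensions, and module names are illustrative assumptions only.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization over a learned codebook."""

    def __init__(self, num_codes: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, time, dim) -> index of the closest codebook entry per step.
        dists = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)  # (B, T, K)
        idx = dists.argmin(dim=-1)                                       # (B, T)
        quantized = self.codebook(idx)                                   # (B, T, dim)
        # Straight-through estimator so gradients flow back to the encoder.
        quantized = z + (quantized - z).detach()
        return quantized, idx


class MultiGranularVQVAE(nn.Module):
    """Encodes a motion sequence at several temporal strides (assumed 1, 4, 16),
    quantizes each stream against its own codebook, and reconstructs the
    sequence as the sum of the decoded streams."""

    def __init__(self, motion_dim: int = 165, latent_dim: int = 256,
                 granularities=(1, 4, 16), num_codes: int = 512):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv1d(motion_dim, latent_dim, kernel_size=g, stride=g),
                          nn.GELU())
            for g in granularities)
        self.quantizers = nn.ModuleList(
            VectorQuantizer(num_codes, latent_dim) for _ in granularities)
        self.decoders = nn.ModuleList(
            nn.ConvTranspose1d(latent_dim, motion_dim, kernel_size=g, stride=g)
            for g in granularities)

    def forward(self, motion: torch.Tensor):
        # motion: (batch, frames, motion_dim); frames assumed divisible by the
        # largest stride so every decoded stream has the same length.
        x = motion.transpose(1, 2)                           # (B, C, T) for Conv1d
        recon, token_streams = 0.0, []
        for enc, vq, dec in zip(self.encoders, self.quantizers, self.decoders):
            z = enc(x).transpose(1, 2)                       # (B, T/g, latent_dim)
            zq, idx = vq(z)
            token_streams.append(idx)                        # one stream per stride
            recon = recon + dec(zq.transpose(1, 2))          # back to (B, C, T)
        return recon.transpose(1, 2), token_streams


if __name__ == "__main__":
    model = MultiGranularVQVAE()
    fake_motion = torch.randn(2, 64, 165)                    # 2 clips, 64 frames
    recon, tokens = model(fake_motion)
    print(recon.shape, [t.shape for t in tokens])

In this reading, each stride owns its own codebook, so short hand movements and longer full-body phrases end up in different token streams; an audio-conditioned token predictor (not sketched here) would then emit one token stream per granularity before decoding.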

View on arXiv: https://arxiv.org/abs/2505.08293
@article{yin2025_2505.08293,
  title={M3G: Multi-Granular Gesture Generator for Audio-Driven Full-Body Human Motion Synthesis},
  author={Zhizhuo Yin and Yuk Hang Tsui and Pan Hui},
  journal={arXiv preprint arXiv:2505.08293},
  year={2025}
}