ExGes: Expressive Human Motion Retrieval and Modulation for Audio-Driven Gesture Synthesis

9 March 2025
Xukun Zhou
Fengxin Li
Ming Chen
Yan Zhou
Pengfei Wan
Di Zhang
Yeying Jin
Zhaoxin Fan
Hongyan Liu
Jun He
Abstract

Audio-driven human gesture synthesis is a crucial task with broad applications in virtual avatars, human-computer interaction, and creative content generation. Despite notable progress, existing methods often produce gestures that are coarse, lack expressiveness, and fail to fully align with audio semantics. To address these challenges, we propose ExGes, a novel retrieval-enhanced diffusion framework with three key designs: (1) a Motion Base Construction module, which builds a gesture library from the training dataset; (2) a Motion Retrieval Module, which employs contrastive learning and momentum distillation for fine-grained retrieval of reference poses; and (3) a Precision Control Module, which integrates partial masking and stochastic masking to enable flexible and fine-grained control. Experimental evaluations on BEAT2 demonstrate that ExGes reduces Fréchet Gesture Distance by 6.2% and improves motion diversity by 5.3% over EMAGE, with user studies revealing a 71.3% preference for its naturalness and semantic relevance. Code will be released upon acceptance.
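To make the retrieval-and-masking ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch. It is not the paper's implementation: the symmetric InfoNCE loss, the top-k retrieval over a precomputed motion base, the joint-level masking scheme, and all names, shapes, and hyperparameters (embedding size 256, keep_prob=0.5, k=4) are assumptions for illustration, and momentum distillation is omitted for brevity.

# Hypothetical sketch of contrastive audio-motion retrieval and
# stochastic partial masking. Module names and hyperparameters are
# assumptions, not the ExGes implementation.
import torch
import torch.nn.functional as F

def contrastive_retrieval_loss(audio_emb, motion_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning paired audio and motion
    embeddings, each of shape (batch, dim)."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    motion_emb = F.normalize(motion_emb, dim=-1)
    logits = audio_emb @ motion_emb.t() / temperature   # (batch, batch)
    targets = torch.arange(logits.size(0))              # diagonal pairs match
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

def retrieve_reference_poses(audio_emb, motion_base, k=4):
    """Return indices of the top-k nearest motion-base entries for each
    audio query, by cosine similarity."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    motion_base = F.normalize(motion_base, dim=-1)
    scores = audio_emb @ motion_base.t()                # (batch, base_size)
    return scores.topk(k, dim=-1).indices               # (batch, k)

def stochastic_partial_mask(pose_seq, keep_prob=0.5):
    """Randomly drop joints per frame so retrieved poses act as soft
    guidance rather than hard targets. pose_seq: (batch, frames, joints, 3)."""
    mask = torch.rand(pose_seq.shape[:3], device=pose_seq.device) < keep_prob
    return pose_seq * mask.unsqueeze(-1), mask

if __name__ == "__main__":
    audio = torch.randn(8, 256)
    motion = torch.randn(8, 256)
    print("contrastive loss:", contrastive_retrieval_loss(audio, motion).item())
    base = torch.randn(100, 256)
    print("retrieved ids:", retrieve_reference_poses(audio, base).shape)
    poses = torch.randn(8, 60, 55, 3)
    masked, mask = stochastic_partial_mask(poses)
    print("kept fraction:", mask.float().mean().item())

A symmetric contrastive loss is a common choice for aligning two modalities; whether ExGes uses this exact formulation, or how its masks are scheduled during diffusion training, is not specified in the abstract.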

@article{zhou2025_2503.06499,
  title={ExGes: Expressive Human Motion Retrieval and Modulation for Audio-Driven Gesture Synthesis},
  author={Xukun Zhou and Fengxin Li and Ming Chen and Yan Zhou and Pengfei Wan and Di Zhang and Yeying Jin and Zhaoxin Fan and Hongyan Liu and Jun He},
  journal={arXiv preprint arXiv:2503.06499},
  year={2025}
}