MoCLIP: Motion-Aware Fine-Tuning and Distillation of CLIP for Human Motion Generation

16 May 2025
Gabriel Maldonado, Armin Danesh Pazho, Ghazal Alinezhad Noghre, Vinit Katariya, Hamed Tabkhi
Topics: CLIP, VGen
Abstract

Human motion generation is essential for fields such as animation, robotics, and virtual reality, requiring models that effectively capture motion dynamics from text descriptions. Existing approaches often rely on Contrastive Language-Image Pretraining (CLIP)-based text encoders, but training on text-image pairs limits their ability to capture the temporal and kinematic structure inherent in human motion. This work introduces MoCLIP, a CLIP model fine-tuned on motion sequences with an additional motion encoding head, trained using contrastive learning and a tethering loss. By explicitly incorporating motion-aware representations, MoCLIP improves motion fidelity while remaining compatible with existing CLIP-based pipelines, integrating seamlessly into a variety of CLIP-based methods. Experiments demonstrate that MoCLIP improves Top-1, Top-2, and Top-3 retrieval accuracy while maintaining competitive FID, yielding better text-to-motion alignment. These results highlight MoCLIP's versatility and effectiveness, establishing it as a robust framework for enhancing motion generation.
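The abstract describes the approach only at a high level. As a rough illustration, the sketch below shows one plausible reading of motion-aware fine-tuning: a motion encoding head is trained against CLIP text embeddings with a symmetric contrastive (InfoNCE) loss, while a tethering term keeps the fine-tuned text encoder close to the frozen original CLIP embeddings. This is not the authors' implementation: the module names (`MotionEncoder`, `moclip_step`), the 263-dimensional motion features (a common HumanML3D convention), and the `tether_weight` are all assumptions.

```python
# A minimal sketch (not the authors' code) of motion-aware CLIP fine-tuning.
# All module names, dimensions, and the loss weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    """Hypothetical motion head: encodes motion-feature sequences into CLIP space."""
    def __init__(self, n_feats=263, d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        self.proj_in = nn.Linear(n_feats, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, motion):                  # motion: (B, T, n_feats)
        h = self.encoder(self.proj_in(motion))  # (B, T, d_model)
        return h.mean(dim=1)                    # temporal pooling -> (B, d_model)

def contrastive_loss(motion_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of (motion, text) pairs."""
    m = F.normalize(motion_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = m @ t.T / temperature              # (B, B) similarity matrix
    labels = torch.arange(len(m), device=m.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

def tethering_loss(text_emb, frozen_text_emb):
    """Keeps fine-tuned text embeddings close to the original CLIP ones."""
    return F.mse_loss(text_emb, frozen_text_emb.detach())

def moclip_step(motion_enc, text_enc, frozen_text_enc, motion, tokens,
                tether_weight=0.5):
    """One training objective: contrastive alignment + tethering regularizer."""
    motion_emb = motion_enc(motion)
    text_emb = text_enc(tokens)
    with torch.no_grad():                       # original CLIP stays frozen
        frozen_emb = frozen_text_enc(tokens)
    return (contrastive_loss(motion_emb, text_emb)
            + tether_weight * tethering_loss(text_emb, frozen_emb))
```

Under this reading, the tethering term is what preserves compatibility with existing CLIP-based pipelines: the text encoder is regularized toward its pretrained embedding space rather than drifting freely during motion fine-tuning.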

@article{maldonado2025_2505.10810,
  title={MoCLIP: Motion-Aware Fine-Tuning and Distillation of CLIP for Human Motion Generation},
  author={Gabriel Maldonado and Armin Danesh Pazho and Ghazal Alinezhad Noghre and Vinit Katariya and Hamed Tabkhi},
  journal={arXiv preprint arXiv:2505.10810},
  year={2025}
}