G-DReaM: Graph-conditioned Diffusion Retargeting across Multiple Embodiments

27 May 2025
Zhefeng Cao
Ben Liu
S. Li
Wei Zhang
Hua Chen
Abstract

Motion retargeting for a specific robot from existing motion datasets is a critical step in transferring motion patterns from human behaviors to and across various robots. However, inconsistencies in topological structure, geometrical parameters, and joint correspondence make it difficult to handle diverse embodiments with a unified retargeting architecture. In this work, we propose a novel unified graph-conditioned diffusion-based motion generation framework for retargeting reference motions across diverse embodiments. The intrinsic characteristics of heterogeneous embodiments are represented with a graph structure that effectively captures the topological and geometrical features of different robots. This graph-based encoding further allows knowledge to be exploited at the joint level through a customized attention mechanism developed in this work. Lacking ground-truth motions for the desired embodiment, we train the diffusion model with an energy-based guidance formulated as retargeting losses. Our experiments validate that the proposed model, one of the first cross-embodiment motion retargeting methods in robotics, can retarget motions across heterogeneous embodiments in a unified manner. Moreover, it demonstrates a certain degree of generalization to both diverse skeletal structures and similar motion patterns.
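
The abstract outlines the training idea at a high level: condition a diffusion denoiser on a graph encoding of the target robot and, lacking ground-truth motions, drive training with energy-based retargeting losses. Below is a minimal sketch of one way such a training step could look, assuming a PyTorch denoiser callable and a linear noise schedule; the function names, the placeholder loss terms, and the use of the reference motion in the forward process are illustrative assumptions, not the paper's actual implementation.

import torch

def cosine_alpha_bars(T=1000):
    # Cumulative-product noise schedule derived from a linear beta schedule.
    betas = torch.linspace(1e-4, 0.02, T)
    return torch.cumprod(1.0 - betas, dim=0)

def retargeting_loss(pred_motion, ref_motion):
    # Hypothetical "energy": a smoothness term plus a crude trajectory-matching
    # term against the reference; the paper's actual retargeting loss terms are
    # not given in the abstract.
    smooth = (pred_motion[:, 1:] - pred_motion[:, :-1]).pow(2).mean()
    match = (pred_motion - ref_motion).pow(2).mean()
    return smooth + match

def training_step(denoiser, ref_motion, target_graph, alpha_bars, optimizer):
    # One diffusion training step supervised by retargeting losses rather than
    # ground-truth target motions. For simplicity this sketch assumes the
    # reference and target motions share a state dimension.
    B = ref_motion.shape[0]
    T = alpha_bars.shape[0]
    t = torch.randint(0, T, (B,))
    a_bar = alpha_bars[t].view(B, 1, 1)
    noise = torch.randn_like(ref_motion)
    # Forward (noising) process applied to the reference motion, used here as a
    # stand-in for the unknown target-embodiment motion.
    x_t = torch.sqrt(a_bar) * ref_motion + torch.sqrt(1.0 - a_bar) * noise
    # The denoiser predicts the retargeted motion, conditioned on the reference
    # motion and the target embodiment's graph encoding (topology + geometry).
    pred_motion = denoiser(x_t, t, ref_motion, target_graph)
    loss = retargeting_loss(pred_motion, ref_motion)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In the paper, the graph conditioning also feeds a customized joint-level attention mechanism; that component is abstracted into the denoiser call above.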

View on arXiv
@article{cao2025_2505.20857,
  title={G-DReaM: Graph-conditioned Diffusion Retargeting across Multiple Embodiments},
  author={Zhefeng Cao and Ben Liu and Sen Li and Wei Zhang and Hua Chen},
  journal={arXiv preprint arXiv:2505.20857},
  year={2025}
}