
How to Train Your Dragon: Automatic Diffusion-Based Rigging for Characters with Diverse Topologies

19 March 2025
Zeqi Gu, Difan Liu, Timothy Langlois, Matthew Fisher, Abe Davis
Abstract

Recent diffusion-based methods have achieved impressive results on animating images of human subjects. However, most of that success has built on human-specific body pose representations and extensive training with labeled real videos. In this work, we extend the ability of such models to animate images of characters with more diverse skeletal topologies. Given a small number (3-5) of example frames showing the character in different poses with corresponding skeletal information, our model quickly infers a rig for that character that can generate images corresponding to new skeleton poses. We propose a procedural data generation pipeline that efficiently samples training data with diverse topologies on the fly. We use it, along with a novel skeleton representation, to train our model on articulated shapes spanning a large space of textures and topologies. Then during fine-tuning, our model rapidly adapts to unseen target characters and generalizes well to rendering new poses, for both realistic and more stylized cartoon appearances. To better evaluate performance on this novel and challenging task, we create the first 2D video dataset that contains both humanoid and non-humanoid subjects with per-frame keypoint annotations. With extensive experiments, we demonstrate the superior quality of our results. Project page: this https URL
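The abstract mentions a procedural pipeline that samples training skeletons with diverse topologies on the fly. The paper does not specify the sampling scheme, but the core idea can be sketched as drawing random tree-structured skeletons; the function below is an illustrative toy version, not the authors' implementation, and all names and parameters (`sample_skeleton`, `max_joints`, `max_children`) are assumptions.

```python
import random

def sample_skeleton(max_joints=12, max_children=3, seed=None):
    """Sample a random tree-structured skeleton topology.

    Returns a parent-index list: parents[i] is the parent joint of
    joint i, with parents[0] == -1 for the root. This is a hypothetical
    sketch of "sampling diverse topologies on the fly"; the actual
    pipeline in the paper may differ substantially.
    """
    rng = random.Random(seed)
    n = rng.randint(2, max_joints)
    parents = [-1]            # joint 0 is the root
    children_count = [0]      # how many children each joint already has
    for i in range(1, n):
        # attach the new joint to any earlier joint with spare capacity
        candidates = [j for j in range(i) if children_count[j] < max_children]
        p = rng.choice(candidates)
        parents.append(p)
        children_count[p] += 1
        children_count.append(0)
    return parents

# Each call yields a different articulated topology
# (chains, stars, branching trees), e.g.:
skel = sample_skeleton(seed=42)
```

Varying `max_joints` and `max_children` trades off between chain-like limbs and heavily branched bodies, which is the kind of topological diversity the training data would need to cover.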

@article{gu2025_2503.15586,
  title={How to Train Your Dragon: Automatic Diffusion-Based Rigging for Characters with Diverse Topologies},
  author={Zeqi Gu and Difan Liu and Timothy Langlois and Matthew Fisher and Abe Davis},
  journal={arXiv preprint arXiv:2503.15586},
  year={2025}
}