
Kimodo: Scaling Controllable Human Motion Generation

Davis Rempe
Mathis Petrovich
Ye Yuan
Haotian Zhang
Xue Bin Peng
Yifeng Jiang
Tingwu Wang
Umar Iqbal
David Minor
Michael de Ruyter
Jiefeng Li
Chen Tessler
Edy Lim
Eugene Jeong
Sam Wu
Ehsan Hassani
Michael Huang
Jin-Bey Yu
Chaeyeon Chung
Lina Song
Olivier Dionne
Jan Kautz
Simon Yuen
Sanja Fidler
Main: 16 pages · 10 figures · Bibliography: 4 pages · 2 tables · Appendix: 1 page
Abstract

High-quality human motion data is becoming increasingly important for applications in robotics, simulation, and entertainment. Recent generative models offer a potential data source, enabling human motion synthesis through intuitive inputs like text prompts or kinematic constraints on poses. However, the small scale of public mocap datasets has limited the motion quality, control accuracy, and generalization of these models. In this work, we introduce Kimodo, an expressive and controllable kinematic motion diffusion model trained on 700 hours of optical motion capture data. Our model generates high-quality motions while being easily controlled through text and a comprehensive suite of kinematic constraints, including full-body keyframes, sparse joint positions/rotations, 2D waypoints, and dense 2D paths. This is enabled by a carefully designed motion representation and a two-stage denoiser architecture that decomposes root and body prediction, minimizing motion artifacts while allowing flexible constraint conditioning. Experiments on our large-scale mocap dataset justify key design decisions and analyze how scaling the dataset and model size affects performance.
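To make the root/body decomposition concrete, the sketch below shows one way a two-stage denoiser could be wired up: a first network denoises the root trajectory, and a second denoises the body pose conditioned on that root and on a conditioning embedding (e.g., text and kinematic constraints). This is a minimal illustrative sketch, not the paper's implementation; all class names, feature dimensions, and the conditioning format are assumptions.

```python
import torch
import torch.nn as nn

class TwoStageDenoiser(nn.Module):
    """Hypothetical sketch: stage 1 denoises the root trajectory,
    stage 2 denoises body pose conditioned on the predicted root.
    Dimensions and structure are illustrative, not from the paper."""

    def __init__(self, root_dim=4, body_dim=63, cond_dim=512, hidden=256):
        super().__init__()
        # Stage 1: root trajectory (e.g., planar position + heading per frame).
        self.root_net = nn.Sequential(
            nn.Linear(root_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, root_dim),
        )
        # Stage 2: body pose, conditioned on the denoised root and constraints.
        self.body_net = nn.Sequential(
            nn.Linear(body_dim + root_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, body_dim),
        )

    def forward(self, noisy_root, noisy_body, cond, t):
        # Broadcast the diffusion timestep as an extra per-frame feature.
        t_feat = t.expand(noisy_root.shape[:-1] + (1,))
        root = self.root_net(torch.cat([noisy_root, cond, t_feat], dim=-1))
        body = self.body_net(torch.cat([noisy_body, root, cond, t_feat], dim=-1))
        return root, body

# Toy usage: a batch of 2 motions, 60 frames each (all shapes assumed).
model = TwoStageDenoiser()
T = 60
noisy_root = torch.randn(2, T, 4)    # noisy root features per frame
noisy_body = torch.randn(2, T, 63)   # noisy body-pose features per frame
cond = torch.randn(2, T, 512)        # text + kinematic-constraint embedding
t = torch.tensor(0.5)                # normalized diffusion timestep
root_pred, body_pred = model(noisy_root, noisy_body, cond, t)
```

Conditioning the body stage on the root output is what lets root artifacts (sliding, drift) be addressed separately from pose quality, while constraints such as keyframes or 2D waypoints can enter through the shared conditioning embedding.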
