We present a keyframe-based framework for generating music-synchronized, choreography-aware animal dance videos. Starting from a few keyframes representing distinct animal poses -- generated via text-to-image prompting or GPT-4o -- we formulate dance synthesis as a graph optimization problem: find the optimal keyframe structure that satisfies a specified choreography pattern of beats, which can be automatically estimated from a reference dance video. We also introduce an approach for mirrored pose image generation, essential for capturing symmetry in dance. In-between frames are synthesized using a video diffusion model. With as few as six input keyframes, our method can produce dance videos up to 30 seconds long across a wide range of animals and music tracks.
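To make the graph-optimization formulation concrete, here is a minimal sketch of one plausible instantiation: a Viterbi-style dynamic program that assigns one of K keyframe poses to each beat slot so as to minimize total pose-transition cost. This is an illustrative assumption, not the authors' implementation; the names `plan_keyframes` and `transition_cost`, and the use of an infinite diagonal to force a pose change on every beat, are all hypothetical.

```python
# Hypothetical sketch (not the paper's code): picking a keyframe pose for each
# beat via Viterbi-style dynamic programming on a pose-transition graph.
import numpy as np

def plan_keyframes(transition_cost: np.ndarray, num_beats: int) -> list[int]:
    """Assign one of K keyframe poses to each beat so that the total
    pose-to-pose transition cost is minimized.

    transition_cost: (K, K) matrix; entry [i, j] is the cost of moving
    from pose i to pose j between consecutive beats (np.inf forbids a
    transition, e.g. to force a pose change on every beat).
    """
    K = transition_cost.shape[0]
    # best[t, j] = minimal cost of any pose sequence ending in pose j at beat t
    best = np.zeros((num_beats, K))
    back = np.zeros((num_beats, K), dtype=int)
    for t in range(1, num_beats):
        # arrival[i, j] = cost of reaching pose j at beat t from pose i at t-1
        arrival = best[t - 1][:, None] + transition_cost
        back[t] = arrival.argmin(axis=0)
        best[t] = arrival.min(axis=0)
    # Backtrack the cheapest pose sequence from the final beat.
    seq = [int(best[-1].argmin())]
    for t in range(num_beats - 1, 0, -1):
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]

# Example: six poses (matching the paper's keyframe count), with repeated
# poses on consecutive beats forbidden via an infinite diagonal.
K = 6
cost = np.random.rand(K, K)
np.fill_diagonal(cost, np.inf)
print(plan_keyframes(cost, num_beats=16))
```

In this framing, constraints from the choreography pattern (e.g. mirrored poses at symmetric beats) would enter as additional structure on the cost matrix or as per-beat restrictions on the allowed poses.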
@article{wang2025_2505.23738,
  title={How Animals Dance (When You're Not Looking)},
  author={Xiaojuan Wang and Aleksander Holynski and Brian Curless and Ira Kemelmacher-Shlizerman and Steve Seitz},
  journal={arXiv preprint arXiv:2505.23738},
  year={2025}
}