BGM2Pose: Active 3D Human Pose Estimation with Non-Stationary Sounds
We propose BGM2Pose, a non-invasive 3D human pose estimation method that uses arbitrary music (e.g., background music) as active sensing signals. Unlike existing approaches that limit practicality by emitting intrusive chirp signals within the audible range, our method relies on natural music that causes minimal discomfort to humans. Estimating human poses from ordinary music is challenging: in contrast to sound sources specifically designed for measurement, regular music varies in both volume and pitch. These dynamic changes in the signal are inevitably mixed with alterations in the sound field caused by human motion, making it difficult to extract reliable cues for pose estimation. To address these challenges, BGM2Pose introduces a Contrastive Pose Extraction Module that employs contrastive learning and hard negative sampling to remove musical components from the recorded data and isolate the pose information. Additionally, we propose a Frequency-wise Attention Module that enables the model to focus on subtle acoustic variations attributable to human movement by dynamically computing attention across frequency bands. Experiments suggest that our method outperforms existing methods, demonstrating substantial potential for real-world applications. Our datasets and code will be made publicly available.
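The abstract does not provide implementation details, so the following is only a minimal PyTorch sketch of the two ideas it names: attention computed over frequency bands, and an InfoNCE-style contrastive loss with hard negatives that treats music-only features as negatives. The class and function names (FrequencyWiseAttention, contrastive_pose_loss), the tensor shapes, and the way hard negatives are constructed are assumptions for illustration, not the paper's actual design.

# Minimal sketch (PyTorch). All module/function names, shapes, and the
# hard-negative construction below are assumptions; the paper's actual
# architecture is not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrequencyWiseAttention(nn.Module):
    """Re-weights spectrogram features per frequency band so the model can
    emphasize subtle acoustic variations caused by human movement."""

    def __init__(self, num_freq_bins: int, hidden_dim: int = 64):
        super().__init__()
        # Small MLP that scores each frequency bin from its time-pooled feature.
        self.score = nn.Sequential(
            nn.Linear(num_freq_bins, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_freq_bins),
        )

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, freq, time) log-spectrogram of the recorded signal.
        pooled = spec.mean(dim=-1)                          # (batch, freq)
        attn = torch.softmax(self.score(pooled), dim=-1)    # attention over frequency bands
        return spec * attn.unsqueeze(-1)                    # re-weighted spectrogram


def contrastive_pose_loss(pose_feat, pos_feat, hard_neg_feat, temperature: float = 0.07):
    """InfoNCE-style loss: pull pose-related features toward their positives and
    push them away from hard negatives (e.g., music-only features)."""
    pose_feat = F.normalize(pose_feat, dim=-1)              # (batch, dim)
    pos_feat = F.normalize(pos_feat, dim=-1)                # (batch, dim)
    hard_neg_feat = F.normalize(hard_neg_feat, dim=-1)      # (batch, num_neg, dim)

    pos_logit = (pose_feat * pos_feat).sum(-1, keepdim=True)           # (batch, 1)
    neg_logits = torch.einsum("bd,bnd->bn", pose_feat, hard_neg_feat)  # (batch, num_neg)
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)                  # positive sits at index 0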
@article{shibata2025_2503.00389,
  title={BGM2Pose: Active 3D Human Pose Estimation with Non-Stationary Sounds},
  author={Yuto Shibata and Yusuke Oumi and Go Irie and Akisato Kimura and Yoshimitsu Aoki and Mariko Isogawa},
  journal={arXiv preprint arXiv:2503.00389},
  year={2025}
}