Single-Microphone-Based Sound Source Localization for Mobile Robots in Reverberant Environments

Accurately estimating sound source positions is crucial for robot audition. However, existing sound source localization methods typically rely on a microphone array with at least two spatially preconfigured microphones. This requirement hinders the applicability of robot audition systems built on microphone arrays. To address these challenges, we propose an online sound source localization method that uses a single microphone mounted on a mobile robot in reverberant environments. Specifically, we develop a lightweight neural network model with only 43k parameters to perform real-time distance estimation by extracting temporal information from reverberant signals. The estimated distances are then processed using an extended Kalman filter to achieve online sound source localization. To the best of our knowledge, this is the first work to achieve online sound source localization using a single microphone on a moving robot. Extensive experiments demonstrate the effectiveness and merits of our approach. To benefit the broader research community, we have open-sourced our code at this https URL.
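To make the second stage concrete, the sketch below illustrates how per-step distance estimates from a moving microphone can be fused by an extended Kalman filter into a source position estimate. It is a minimal reconstruction under assumptions, not the authors' released implementation: a static 2-D source, a known microphone position at every step, and simulated noisy distances standing in for the neural network's output; the class name RangeOnlyEKF and all parameter values are illustrative.

```python
"""Minimal sketch of range-only source localization with an EKF.

Assumptions (not from the paper's code): static 2-D source, known microphone
position at each step, and simulated distance measurements in place of the
lightweight neural network's real-time estimates.
"""
import numpy as np


class RangeOnlyEKF:
    def __init__(self, init_guess, init_cov=10.0, meas_var=0.05, proc_var=1e-4):
        self.x = np.asarray(init_guess, dtype=float)  # source position [sx, sy]
        self.P = np.eye(2) * init_cov                 # state covariance
        self.R = meas_var                             # distance-measurement variance
        self.Q = np.eye(2) * proc_var                 # small process noise (static source)

    def step(self, mic_pos, dist_meas):
        # Predict: the source is modeled as static, so only the covariance grows.
        self.P = self.P + self.Q

        # Measurement model h(x) = ||x - mic_pos|| and its Jacobian.
        diff = self.x - np.asarray(mic_pos, dtype=float)
        pred_dist = max(np.linalg.norm(diff), 1e-6)
        H = (diff / pred_dist).reshape(1, 2)

        # Standard EKF update with the scalar distance innovation.
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T / S
        self.x = self.x + (K * (dist_meas - pred_dist)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
        return self.x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = np.array([3.0, 2.0])            # ground truth, unknown to the filter
    ekf = RangeOnlyEKF(init_guess=[0.0, 0.0])

    # The robot moves along a path; each step yields one noisy distance estimate.
    for t in np.linspace(0.0, 2 * np.pi, 200):
        mic = np.array([2.0 * np.cos(t), 2.0 * np.sin(t)])        # known mic position
        d = np.linalg.norm(source - mic) + rng.normal(0.0, 0.05)  # stands in for NN output
        est = ekf.step(mic, d)

    print("estimated source position:", est)
```

Because a single distance constrains the source only to a circle around the microphone, the robot's motion is what makes the estimate observable: measurements taken from different positions intersect, and the filter converges as the path accumulates geometric diversity.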
@article{wang2025_2506.16173,
  title   = {Single-Microphone-Based Sound Source Localization for Mobile Robots in Reverberant Environments},
  author  = {Jiang Wang and Runwu Shi and Benjamin Yen and He Kong and Kazuhiro Nakadai},
  journal = {arXiv preprint arXiv:2506.16173},
  year    = {2025}
}