Unleashing the Power of Natural Audio Featuring Multiple Sound Sources

Universal sound separation aims to extract clean audio tracks corresponding to distinct events from mixed audio, which is critical for artificial auditory perception. However, current methods heavily rely on artificially mixed audio for training, which limits their ability to generalize to naturally mixed audio collected in real-world environments. To overcome this limitation, we propose ClearSep, an innovative framework that employs a data engine to decompose complex naturally mixed audio into multiple independent tracks, thereby allowing effective sound separation in real-world scenarios. We introduce two remix-based evaluation metrics to quantitatively assess separation quality and use these metrics as thresholds to iteratively apply the data engine alongside model training, progressively optimizing separation performance. In addition, we propose a series of training strategies tailored to these separated independent tracks to make the best use of them. Extensive experiments demonstrate that ClearSep achieves state-of-the-art performance across multiple sound separation tasks, highlighting its potential for advancing sound separation in natural audio scenarios. For more examples and detailed results, please visit our demo page at this https URL.
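As a rough illustration of the iterative loop described in the abstract, the Python sketch below alternates between mining pseudo-clean tracks from naturally mixed audio and retraining the separator. It is a minimal sketch under stated assumptions: the separate_fn / train_fn callables and the single remix-consistency score are hypothetical placeholders standing in for ClearSep's actual model interface and its two remix-based metrics, neither of which is specified in the abstract.

import numpy as np

def remix_score(tracks, mixture):
    # Hypothetical remix-based metric: measures how closely the separated
    # tracks sum back to the original mixture (an SNR-like value in dB).
    remix = np.sum(tracks, axis=0)
    noise = np.mean((remix - mixture) ** 2) + 1e-12
    signal = np.mean(mixture ** 2) + 1e-12
    return 10.0 * np.log10(signal / noise)

def data_engine(separate_fn, natural_mixtures, threshold_db):
    # Decompose each naturally mixed clip into independent tracks and keep
    # only decompositions whose remix score clears the threshold.
    accepted = []
    for mixture in natural_mixtures:
        tracks = separate_fn(mixture)  # assumed: returns an array of separated tracks
        if remix_score(tracks, mixture) >= threshold_db:
            accepted.append((mixture, tracks))
    return accepted

def iterative_training(separate_fn, train_fn, natural_mixtures,
                       threshold_db=20.0, rounds=3):
    # Alternate the data engine with model training: a better separator
    # yields more decompositions that clear the threshold in the next round.
    for _ in range(rounds):
        pseudo_labeled = data_engine(separate_fn, natural_mixtures, threshold_db)
        train_fn(pseudo_labeled)  # assumed: updates the separation model on accepted tracks
    return separate_fn

The threshold gating reflects the abstract's use of the metrics as acceptance thresholds: only decompositions the current model separates cleanly enough enter the training pool, so the pool of usable natural-audio tracks grows as separation quality improves.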
@article{cheng2025_2504.17782,
  title   = {Unleashing the Power of Natural Audio Featuring Multiple Sound Sources},
  author  = {Xize Cheng and Slytherin Wang and Zehan Wang and Rongjie Huang and Tao Jin and Zhou Zhao},
  journal = {arXiv preprint arXiv:2504.17782},
  year    = {2025}
}