We propose a new dataset for cinematic audio source separation (CASS) that handles non-verbal sounds. Existing CASS datasets contain only read-style speech in the speech stem, which differs from actual movie audio, where voices are often acted out. As a result, models trained on these conventional datasets tend to separate emotionally heightened voices, such as laughter and screams, into the effects stem rather than the speech stem. To address this problem, we build a new dataset, DnR-nonverbal, whose speech stem includes non-verbal sounds such as laughter and screams. Our experiments reveal that the current CASS model fails to extract non-verbal sounds as speech, and show that our dataset effectively addresses this issue on both synthetic and real movie audio. Our dataset is available at this https URL.
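For concreteness, the following is a minimal, hypothetical sketch (not the authors' actual pipeline) of how a DnR-style training mixture could place non-verbal vocalizations in the speech stem rather than the effects stem; the file names, gain ranges, and mixing recipe are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 44100  # sample rate assumed for this sketch

def load_clip(path: str, num_samples: int) -> np.ndarray:
    """Placeholder loader: in practice one would use e.g. soundfile.read.

    Here we synthesize noise so the sketch runs stand-alone; `path`
    is a hypothetical file name, not part of the released dataset.
    """
    return rng.standard_normal(num_samples).astype(np.float32)

n = sr * 10  # a 10-second excerpt

# Speech stem: read-style dialogue PLUS non-verbal vocalizations
# (laughter, screams) -- the key change from conventional CASS
# datasets, which leave such sounds out of the speech stem.
speech = load_clip("dialogue.wav", n) + 0.8 * load_clip("laughter.wav", n)

music = load_clip("music.wav", n)
effects = load_clip("effects.wav", n)  # non-vocal sounds only

def apply_gain(x: np.ndarray, low_db: float, high_db: float) -> np.ndarray:
    """Simplified per-stem level randomization in dB (assumed ranges)."""
    gain_db = rng.uniform(low_db, high_db)
    return x * 10.0 ** (gain_db / 20.0)

stems = {
    "speech": apply_gain(speech, -6.0, 0.0),
    "music": apply_gain(music, -12.0, -3.0),
    "effects": apply_gain(effects, -12.0, -3.0),
}

# The mixture is the sum of the three stems; a separation model is
# trained to recover each stem from this mixture, so laughter and
# screams now count as speech in the training targets.
mixture = sum(stems.values())
```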
@article{hasumi2025_2506.02499,
  title={DnR-nonverbal: Cinematic Audio Source Separation Dataset Containing Non-Verbal Sounds},
  author={Takuya Hasumi and Yusuke Fujita},
  journal={arXiv preprint arXiv:2506.02499},
  year={2025}
}