DnR-nonverbal: Cinematic Audio Source Separation Dataset Containing Non-Verbal Sounds

3 June 2025
Takuya Hasumi
Yusuke Fujita
Main: 4 pages, 4 figures, 4 tables; bibliography: 1 page
Abstract

We propose a new dataset for cinematic audio source separation (CASS) that handles non-verbal sounds. Existing CASS datasets contain only reading-style speech in the speech stem, which differs from actual movie audio, where acted-out voices are common. Consequently, models trained on conventional datasets tend to separate emotionally heightened voices, such as laughter and screams, into the effects stem rather than the speech stem. To address this problem, we build a new dataset, DnR-nonverbal, which includes non-verbal sounds such as laughter and screams in the speech stem. Our experiments reveal that the current CASS model fails to extract non-verbal sounds as speech, and show that our dataset effectively addresses this issue on both synthetic and actual movie audio. Our dataset is available at this https URL.
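The key idea in the abstract — placing non-verbal vocalizations (laughter, screams) in the speech stem rather than the effects stem before mixing — can be sketched as a simple mixing step. This is a hypothetical illustration, not the authors' actual data pipeline; the function names are invented, and the three-stem layout (speech, music, effects) follows the DnR convention the dataset name refers to.

```python
import numpy as np


def build_speech_stem(read_speech: np.ndarray, nonverbal: np.ndarray) -> np.ndarray:
    """Append a non-verbal clip (e.g. laughter) to read speech so that it
    belongs to the speech stem, not the effects stem (hypothetical sketch)."""
    return np.concatenate([read_speech, nonverbal])


def mix_stems(speech: np.ndarray, music: np.ndarray, effects: np.ndarray):
    """Form a CASS mixture as the sample-wise sum of the three stems,
    zero-padding each stem to a common length."""
    n = max(len(speech), len(music), len(effects))
    pad = lambda x: np.pad(x, (0, n - len(x)))
    stems = {"speech": pad(speech), "music": pad(music), "effects": pad(effects)}
    mixture = stems["speech"] + stems["music"] + stems["effects"]
    return mixture, stems
```

A separation model trained on such mixtures is then supervised to recover the speech stem including its non-verbal segments, which is the behavior the dataset is built to encourage.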

@article{hasumi2025_2506.02499,
  title={DnR-nonverbal: Cinematic Audio Source Separation Dataset Containing Non-Verbal Sounds},
  author={Takuya Hasumi and Yusuke Fujita},
  journal={arXiv preprint arXiv:2506.02499},
  year={2025}
}