This paper addresses a major challenge in acoustic event detection, in particular infant cry detection in the presence of other sounds and background noise: the lack of precisely annotated data. We present two contributions, one supervised and one unsupervised, for infant cry detection. The first is an annotated dataset for cry segmentation, which enables supervised models to achieve state-of-the-art performance. The second is a novel unsupervised method, Causal Representation Sparse Transition Clustering (CRSTC), based on causal temporal representation, which helps address the issue of data scarcity more generally. By integrating the detected cry segments, we significantly improve the performance of downstream infant cry classification, highlighting the potential of this approach for infant care applications.
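The downstream integration step described above could, at its simplest, amount to keeping only the audio inside the detected cry segments before classification. The sketch below is purely illustrative: the function name, the (start, end)-in-seconds segment format, and the sample rate are assumptions, not details from the paper.

```python
import numpy as np

def extract_cry_segments(audio, segments, sr=16000):
    """Concatenate the detected cry segments of a waveform.

    audio: 1-D numpy array of samples.
    segments: list of (start, end) times in seconds (hypothetical format).
    sr: sample rate in Hz (assumed 16 kHz here).
    """
    parts = [audio[int(start * sr):int(end * sr)] for start, end in segments]
    return np.concatenate(parts) if parts else np.empty(0, dtype=audio.dtype)

# Toy usage: 10 s of noise with two detected cry regions (2.5 s total).
sr = 16000
audio = np.random.randn(sr * 10)
segments = [(1.0, 2.5), (4.0, 5.0)]
cry_audio = extract_cry_segments(audio, segments, sr)
# cry_audio would then be passed to the cry classifier instead of the full recording.
```

A classifier fed only these segments sees far less unrelated sound and background noise, which is one plausible reading of why segmentation improves downstream classification.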
@article{fu2025_2503.06247,
  title={Infant Cry Detection Using Causal Temporal Representation},
  author={Minghao Fu and Danning Li and Aryan Gadhiya and Benjamin Lambright and Mohamed Alowais and Mohab Bahnassy and Saad El Dine Elletter and Hawau Olamide Toyin and Haiyan Jiang and Kun Zhang and Hanan Aldarmaki},
  journal={arXiv preprint arXiv:2503.06247},
  year={2025}
}