Refining music sample identification with a self-supervised graph neural network

Main: 4 pages
1 figure
4 tables
Appendix: 3 pages
Abstract

Automatic sample identification (ASID), the detection and identification of portions of audio recordings that have been reused in new musical works, is an essential but challenging task in the field of audio query-based retrieval. While a related task, audio fingerprinting, has made significant progress in accurately retrieving musical content under "real world" (noisy, reverberant) conditions, ASID systems struggle to identify samples that have undergone musical modifications. Thus, a system robust to common music production transformations such as time-stretching, pitch-shifting, effects processing, and underlying or overlaying music is an important open challenge. In this work, we propose a lightweight and scalable encoding architecture employing a Graph Neural Network within a contrastive learning framework. Our model uses only 9% of the trainable parameters of the current state-of-the-art system while achieving comparable performance, reaching a mean average precision (mAP) of 44.2%. To enhance retrieval quality, we introduce a two-stage approach consisting of an initial coarse similarity search for candidate selection, followed by a cross-attention classifier that rejects irrelevant matches and refines the ranking of retrieved candidates, an essential capability absent in prior models. In addition, because queries in real-world applications are often short in duration, we benchmark our system on short queries using new fine-grained annotations for the Sample100 dataset, which we publish as part of this work.
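As a rough illustration of the two-stage retrieval described in the abstract, the sketch below pairs a coarse cosine-similarity search over precomputed embeddings with a cross-attention classifier that rejects weak matches and re-ranks the rest. This is a minimal PyTorch sketch, not the authors' implementation: the names (coarse_search, CrossAttentionReranker), the embedding dimension, the mean-pooling over time, and the zero-logit rejection threshold are all illustrative assumptions, and in the actual system the embeddings would come from the graph neural network encoder described in the paper.

import torch
import torch.nn.functional as F

# Hypothetical embedding dimension; not specified by the abstract.
EMBED_DIM = 128


def coarse_search(query_emb: torch.Tensor,
                  index_embs: torch.Tensor,
                  top_k: int = 50) -> torch.Tensor:
    """Stage 1: coarse candidate selection by cosine similarity.

    query_emb:  (EMBED_DIM,) embedding of the query excerpt.
    index_embs: (N, EMBED_DIM) embeddings of the reference corpus.
    Returns the indices of the top_k most similar references.
    """
    sims = F.cosine_similarity(query_emb.unsqueeze(0), index_embs)  # (N,)
    return sims.topk(top_k).indices


class CrossAttentionReranker(torch.nn.Module):
    """Stage 2: a cross-attention classifier that scores each
    (query, candidate) pair so that irrelevant matches can be
    rejected and the surviving candidates re-ranked."""

    def __init__(self, dim: int = EMBED_DIM, heads: int = 4):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = torch.nn.Linear(dim, 1)

    def forward(self, query_seq: torch.Tensor,
                cand_seq: torch.Tensor) -> torch.Tensor:
        # query_seq: (B, Tq, dim) frame-level query embeddings
        # cand_seq:  (B, Tc, dim) frame-level candidate embeddings
        attended, _ = self.attn(query_seq, cand_seq, cand_seq)
        # Pool over time and map to one match logit per pair.
        return self.classifier(attended.mean(dim=1)).squeeze(-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Random stand-ins for encoder outputs, for illustration only.
    index_embs = F.normalize(torch.randn(1000, EMBED_DIM), dim=-1)
    query_emb = F.normalize(torch.randn(EMBED_DIM), dim=-1)

    candidates = coarse_search(query_emb, index_embs, top_k=50)

    reranker = CrossAttentionReranker()
    query_seq = torch.randn(len(candidates), 10, EMBED_DIM)
    cand_seqs = torch.randn(len(candidates), 10, EMBED_DIM)
    logits = reranker(query_seq, cand_seqs)

    # Reject candidates below an (assumed) match threshold,
    # then rank the remainder by classifier confidence.
    keep = logits > 0.0
    ranked = candidates[keep][logits[keep].argsort(descending=True)]
    print(ranked[:10])

The key design point the abstract highlights is that the coarse search keeps indexing cheap and scalable, while the pairwise cross-attention stage adds the rejection ability that a plain nearest-neighbour lookup lacks.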

@article{bhattacharjee2025_2506.14684,
  title={Refining music sample identification with a self-supervised graph neural network},
  author={Aditya Bhattacharjee and Ivan Meresman Higgs and Mark Sandler and Emmanouil Benetos},
  journal={arXiv preprint arXiv:2506.14684},
  year={2025}
}