
Towards Source Attribution of Singing Voice Deepfake with Multimodal Foundation Models

Main: 4 pages, 3 figures, 2 tables; bibliography: 1 page
Abstract

In this work, we introduce the task of singing voice deepfake source attribution (SVDSA). We hypothesize that multimodal foundation models (MMFMs) such as ImageBind and LanguageBind will be most effective for SVDSA: their cross-modality pre-training better equips them to capture subtle source-specific characteristics, such as the unique timbre, pitch manipulation, or synthesis artifacts of each singing voice deepfake source. Our experiments with MMFMs, speech foundation models, and music foundation models confirm this hypothesis: MMFMs are the most effective for SVDSA. Furthermore, inspired by related research, we also explore fusion of foundation models (FMs) for improved SVDSA. To this end, we propose COFFE, a novel framework that employs Chernoff distance as a novel loss function for effective fusion of FMs. COFFE with a combination of MMFMs achieves the best performance compared to all individual FMs and baseline fusion methods.
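The abstract names Chernoff distance as COFFE's fusion loss but does not spell out its formulation. Below is a minimal PyTorch sketch of one plausible reading, in which the Chernoff distance C_alpha(p, q) = -log sum_x p(x)^alpha q(x)^(1-alpha) between the class posteriors of two FM branches encourages them to agree, while a fused head is trained with cross-entropy. The names FusionHead and coffe_style_loss, the weighting lam, and all dimensions are hypothetical illustrations, not the paper's actual COFFE implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def chernoff_distance(p_logits, q_logits, alpha=0.5, eps=1e-8):
    # C_alpha(p, q) = -log sum_x p(x)^alpha * q(x)^(1 - alpha),
    # computed per sample over the class dimension, then averaged.
    p = F.softmax(p_logits, dim=-1).clamp_min(eps)
    q = F.softmax(q_logits, dim=-1).clamp_min(eps)
    coeff = (p ** alpha * q ** (1.0 - alpha)).sum(dim=-1)
    return -torch.log(coeff).mean()

class FusionHead(nn.Module):
    # Two projection branches (one per frozen FM embedding, e.g. two MMFMs)
    # plus per-branch classifiers and a joint classifier over the fused features.
    def __init__(self, dim_a, dim_b, hidden=256, num_sources=10):
        super().__init__()
        self.proj_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.proj_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, num_sources)
        self.head_b = nn.Linear(hidden, num_sources)
        self.head_joint = nn.Linear(2 * hidden, num_sources)

    def forward(self, emb_a, emb_b):
        za, zb = self.proj_a(emb_a), self.proj_b(emb_b)
        return (self.head_a(za), self.head_b(zb),
                self.head_joint(torch.cat([za, zb], dim=-1)))

def coffe_style_loss(logits_a, logits_b, logits_joint, labels, lam=0.1):
    # Hypothetical combination: cross-entropy on the fused head plus a
    # Chernoff term pulling the two branch posteriors toward agreement.
    return F.cross_entropy(logits_joint, labels) + \
        lam * chernoff_distance(logits_a, logits_b)

Under this reading, the Chernoff term acts as a symmetric (at alpha = 0.5, Bhattacharyya-like) agreement regularizer between the branches; the actual loss, weighting, and fusion architecture may differ in the paper.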

@article{phukan2025_2506.03364,
  title={Towards Source Attribution of Singing Voice Deepfake with Multimodal Foundation Models},
  author={Orchid Chetia Phukan and Girish and Mohd Mujtaba Akhtar and Swarup Ranjan Behera and Priyabrata Mallick and Pailla Balakrishna Reddy and Arun Balaji Buduru and Rajesh Sharma},
  journal={arXiv preprint arXiv:2506.03364},
  year={2025}
}