Learning Musical Representations for Music Performance Question Answering
Music performances are representative scenarios for audio-visual modeling. Unlike common scenarios with sparse audio, music performances involve dense, continuous audio signals throughout. While existing multimodal learning methods for audio-visual question answering demonstrate impressive capabilities in general scenarios, they cannot address fundamental problems within music performances: they underexplore the interaction between the multimodal signals in a performance and fail to consider the distinctive characteristics of instruments and music. As a result, existing methods tend to answer questions about musical performances inaccurately. To bridge these research gaps, (i) given the intricate multimodal interconnectivity inherent to music data, our primary backbone is designed to incorporate multimodal interactions within the context of music; (ii) to enable the model to learn music characteristics, we annotate and release rhythm and music source labels for the current music datasets; (iii) for time-aware audio-visual modeling, we align the model's music predictions with the temporal dimension. Our experiments show state-of-the-art performance on the Music AVQA datasets. Our code is available at this https URL.
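To make the three components concrete, the sketch below illustrates, in PyTorch, one plausible shape of such a model: cross-modal attention between audio and video features (component i) combined with a per-timestep head whose outputs can be supervised with time-aligned rhythm and source annotations (components ii and iii). This is not the authors' released implementation; all module names, dimensions, and the specific fusion and prediction heads are illustrative assumptions.

import torch
import torch.nn as nn


class AVQABackboneSketch(nn.Module):
    """Hypothetical audio-visual QA backbone with temporal music predictions."""

    def __init__(self, dim=512, n_heads=8, n_answers=42, n_music_labels=2):
        super().__init__()
        # Cross-modal attention: audio frames attend to video frames and vice versa.
        self.a2v = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Per-timestep head for music attributes (e.g., rhythm / sounding source),
        # intended to be supervised with time-aligned annotations.
        self.temporal_head = nn.Linear(dim, n_music_labels)
        # Answer classifier over the question-conditioned fused representation.
        self.classifier = nn.Linear(2 * dim, n_answers)

    def forward(self, audio_feats, video_feats, question_feat):
        # audio_feats, video_feats: (B, T, dim); question_feat: (B, dim).
        a_ctx, _ = self.a2v(audio_feats, video_feats, video_feats)  # audio queries video
        v_ctx, _ = self.v2a(video_feats, audio_feats, audio_feats)  # video queries audio
        # Per-frame music predictions from the fused temporal stream.
        temporal_logits = self.temporal_head(a_ctx + v_ctx)         # (B, T, n_music_labels)
        # Pool over time, condition on the question, and classify the answer.
        fused = torch.cat([a_ctx.mean(dim=1), v_ctx.mean(dim=1)], dim=-1)  # (B, 2*dim)
        answer_logits = self.classifier(fused * question_feat.repeat(1, 2))
        return answer_logits, temporal_logits

In training, the temporal logits could be penalized against frame-level rhythm/source labels (e.g., with a binary cross-entropy loss per timestep) alongside the standard answer classification loss, which is one simple way to "align the model's music predictions with the temporal dimension" as described above.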
@article{diao2025_2502.06710,
  title   = {Learning Musical Representations for Music Performance Question Answering},
  author  = {Xingjian Diao and Chunhui Zhang and Tingxuan Wu and Ming Cheng and Zhongyu Ouyang and Weiyi Wu and Jiang Gui},
  journal = {arXiv preprint arXiv:2502.06710},
  year    = {2025}
}