Versatile Symbolic Music-for-Music Modeling via Function Alignment

Many music AI models learn a mapping between music content and human-defined labels. However, many annotations, such as chords, can be naturally expressed within the music modality itself, e.g., as sequences of symbolic notes. This observation enables both understanding tasks (e.g., chord recognition) and conditional generation tasks (e.g., chord-conditioned melody generation) to be unified under a music-for-music sequence modeling paradigm. In this work, we propose parameter-efficient solutions for a variety of symbolic music-for-music tasks. The high-level idea is that (1) we utilize a pretrained Language Model (LM) for both the reference and the target sequence and (2) we link these two LMs via a lightweight adapter. Experiments show that our method achieves superior performance across diverse tasks such as chord recognition, melody generation, and drum track generation. All demos, code, and model weights are publicly available.
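
The sketch below illustrates one plausible reading of the high-level idea: two frozen decoder-only LMs (one for the reference sequence, one for the target sequence) joined by a small trainable cross-attention adapter, so only the adapter's parameters are updated. The toy LM, vocabulary sizes, and the single cross-attention layer are illustrative assumptions, not the authors' implementation.

# A minimal sketch (assumptions noted above) of linking a reference LM and a
# target LM with a lightweight adapter for music-for-music modeling.
import torch
import torch.nn as nn


class ToyMusicLM(nn.Module):
    """Stand-in for a pretrained decoder-only LM over symbolic music tokens."""

    def __init__(self, vocab_size: int, d_model: int = 256, n_layer: int = 4, n_head: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def hidden_states(self, tokens: torch.Tensor) -> torch.Tensor:
        # Causal (autoregressive) self-attention over the token sequence.
        x = self.embed(tokens)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.blocks(x, mask=causal)


class CrossAttentionAdapter(nn.Module):
    """Lightweight adapter: target hidden states attend to reference hidden states."""

    def __init__(self, d_model: int = 256, n_head: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, target_h: torch.Tensor, reference_h: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(target_h, reference_h, reference_h)
        return self.norm(target_h + attended)  # residual connection


class MusicForMusicModel(nn.Module):
    """Frozen reference/target LMs linked by a trainable adapter (parameter-efficient)."""

    def __init__(self, reference_lm: ToyMusicLM, target_lm: ToyMusicLM):
        super().__init__()
        self.reference_lm = reference_lm
        self.target_lm = target_lm
        self.adapter = CrossAttentionAdapter()
        for lm in (self.reference_lm, self.target_lm):
            for p in lm.parameters():
                p.requires_grad_(False)  # only the adapter is trained

    def forward(self, reference_tokens: torch.Tensor, target_tokens: torch.Tensor) -> torch.Tensor:
        ref_h = self.reference_lm.hidden_states(reference_tokens)
        tgt_h = self.target_lm.hidden_states(target_tokens)
        fused = self.adapter(tgt_h, ref_h)
        return self.target_lm.lm_head(fused)  # next-token logits for the target sequence


if __name__ == "__main__":
    # Example task: chord-conditioned melody generation
    # (reference = chord tokens, target = melody tokens; vocab sizes are arbitrary).
    model = MusicForMusicModel(ToyMusicLM(vocab_size=64), ToyMusicLM(vocab_size=512))
    chords = torch.randint(0, 64, (2, 16))
    melody = torch.randint(0, 512, (2, 32))
    logits = model(chords, melody)
    print(logits.shape)  # torch.Size([2, 32, 512])

Because both pretrained LMs stay frozen, swapping the roles of reference and target sequences would let the same recipe cover understanding tasks (e.g., melody-to-chord recognition) and conditional generation tasks alike.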
@article{jiang2025_2506.15548,
  title={Versatile Symbolic Music-for-Music Modeling via Function Alignment},
  author={Junyan Jiang and Daniel Chin and Liwei Lin and Xuanjie Liu and Gus Xia},
  journal={arXiv preprint arXiv:2506.15548},
  year={2025}
}