Adaptive Accompaniment with ReaLchords

Jamming requires coordination, anticipation, and collaborative creativity between musicians. Current generative models of music produce expressive output but are not able to generate in an \emph{online} manner, that is, simultaneously with other musicians (human or otherwise). We propose ReaLchords, an online generative model for improvising chord accompaniment to user melody. We start with an online model pretrained by maximum likelihood, and use reinforcement learning to finetune the model for online use. The finetuning objective combines a novel reward model, which provides feedback on the harmonic and temporal coherence between melody and chord, with a divergence term that implements a novel type of distillation from a teacher model that can see the future melody. Through quantitative experiments and listening tests, we demonstrate that the resulting model adapts well to unfamiliar input and produces fitting accompaniment. ReaLchords opens the door to live jamming, as well as simultaneous co-creation in other modalities.
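
For intuition, the finetuning objective described above can be sketched as a reward term regularized by a divergence to the future-seeing teacher. The abstract does not give the exact formulation; the decomposition of the reward into $R_{\text{harm}}$ and $R_{\text{temp}}$, the coefficient $\beta$, and the particular divergence $D$ (e.g., a KL divergence) are illustrative notation rather than the paper's own:
\[
\max_{\theta} \; \mathbb{E}_{c \sim \pi_{\theta}(\cdot \mid m)}\!\left[ R_{\text{harm}}(m, c) + R_{\text{temp}}(m, c) \right] \;-\; \beta \, D\!\left( \pi_{\theta}(\cdot \mid m) \,\middle\Vert\, \pi_{\text{teacher}}(\cdot \mid m) \right),
\]
where $m$ is the user melody, $c$ the generated chord sequence, $\pi_{\theta}$ the online student policy conditioned only on the melody so far, and $\pi_{\text{teacher}}$ a teacher conditioned on the full melody, including future tokens.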
@article{wu2025_2506.14723,
  title={Adaptive Accompaniment with ReaLchords},
  author={Yusong Wu and Tim Cooijmans and Kyle Kastner and Adam Roberts and Ian Simon and Alexander Scarlatos and Chris Donahue and Cassie Tarakajian and Shayegan Omidshafiei and Aaron Courville and Pablo Samuel Castro and Natasha Jaques and Cheng-Zhi Anna Huang},
  journal={arXiv preprint arXiv:2506.14723},
  year={2025}
}