Multi-modal music generation, which uses modalities such as text, images, and video alongside musical scores and audio as guidance, is an emerging research area with broad applications. This paper reviews the field, categorizing music generation systems by the modalities involved. The review covers modality representation, multi-modal data alignment, and how aligned modalities are used to guide music generation. Current datasets and evaluation methods are also discussed. Key challenges include effective multi-modal integration, the scarcity of large-scale comprehensive datasets, and the lack of systematic evaluation methods. Finally, an outlook on future research directions is provided, focusing on creativity, efficiency, multi-modal alignment, and evaluation.
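As a minimal, hypothetical illustration (not taken from the survey) of the multi-modal alignment step mentioned above, the sketch below shows CLIP/CLAP-style contrastive alignment between a non-musical modality (e.g., text) and music audio in a shared embedding space; encoder choices, dimensions, and the class name ContrastiveAligner are placeholder assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAligner(nn.Module):
    """Projects text and audio features into a shared space and aligns them
    with a symmetric contrastive (InfoNCE) loss, as in CLIP/CLAP-style training."""
    def __init__(self, text_dim=512, audio_dim=768, embed_dim=256):
        super().__init__()
        # Projection heads map each modality into the shared embedding space.
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~ln(1/0.07)

    def forward(self, text_feats, audio_feats):
        # Normalize so the dot product is cosine similarity.
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        logits = self.logit_scale.exp() * t @ a.t()
        # Matched (text, audio) pairs lie on the diagonal of the similarity matrix.
        labels = torch.arange(logits.size(0), device=logits.device)
        loss = (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.t(), labels)) / 2
        return loss

if __name__ == "__main__":
    # Toy usage: random features stand in for outputs of text and audio encoders.
    model = ContrastiveAligner()
    text_feats = torch.randn(8, 512)
    audio_feats = torch.randn(8, 768)
    print(model(text_feats, audio_feats).item())

Once such a shared space is learned, the projected embedding of the conditioning modality can be fed to a music generator (e.g., as a prefix or cross-attention condition), which is the general pattern many of the surveyed systems follow.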
@article{li2025_2504.00837,
  title   = {A Survey on Music Generation from Single-Modal, Cross-Modal, and Multi-Modal Perspectives},
  author  = {Shuyu Li and Shulei Ji and Zihao Wang and Songruoyao Wu and Jiaxing Yu and Kejun Zhang},
  journal = {arXiv preprint arXiv:2504.00837},
  year    = {2025}
}