InspireMusic: Integrating Super Resolution and Large Language Model for High-Fidelity Long-Form Music Generation

28 February 2025
Chong Zhang
Yukun Ma
Qian Chen
Wen Wang
Shengkui Zhao
Zexu Pan
Hao Wang
Chongjia Ni
Trung Hieu Nguyen
Kun Zhou
Yidi Jiang
Chaohong Tan
Zhifu Gao
Zhihao Du
Bin Ma
Abstract

We introduce InspireMusic, a framework that integrates super resolution and a large language model for high-fidelity long-form music generation. The unified framework generates high-fidelity music, songs, and audio by coupling an autoregressive transformer with a super-resolution flow-matching model, enabling controllable generation of high-fidelity long-form music at a higher sampling rate from both text and audio prompts. Unlike previous approaches, our model uses an audio tokenizer with a single codebook that carries richer semantic information, which reduces training costs and improves efficiency. This combination enables high-quality audio generation with long-form coherence of up to 8 minutes. An autoregressive transformer model based on Qwen 2.5 then predicts audio tokens, and a super-resolution flow-matching model generates high-sampling-rate audio with fine-grained details learned from an acoustic codec model. Comprehensive experiments show that the InspireMusic-1.5B-Long model performs comparably to recent top-tier open-source systems, including MusicGen and Stable Audio 2.0, on subjective and objective evaluations. The code and pre-trained models are released at this https URL.
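The abstract describes a two-stage pipeline: a single-codebook audio tokenizer feeds an autoregressive transformer (based on Qwen 2.5) that predicts audio tokens from text and audio prompts, and a super-resolution flow-matching model then decodes those tokens into a high-sampling-rate waveform. The sketch below illustrates that division of labor in Python. It is a minimal structural sketch only: the class and method names (tokenizer.encode, lm.generate, sr_model.decode) are hypothetical placeholders for illustration, not the released InspireMusic API.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All component interfaces here are hypothetical placeholders, not the
# actual InspireMusic code; they stand in for the paper's three modules.

import torch


class InspireMusicStylePipeline:
    def __init__(self, tokenizer, lm, sr_model):
        self.tokenizer = tokenizer  # single-codebook audio tokenizer
        self.lm = lm                # autoregressive transformer (Qwen 2.5-based)
        self.sr_model = sr_model    # super-resolution flow-matching decoder

    @torch.no_grad()
    def generate(self, text_prompt: str,
                 audio_prompt: torch.Tensor | None = None,
                 max_new_tokens: int = 4096) -> torch.Tensor:
        # Stage 0: encode the optional audio prompt into one stream of
        # discrete semantic tokens (a single codebook, unlike multi-codebook
        # RVQ tokenizers, which keeps the LM stage cheaper to train).
        prompt_tokens = (self.tokenizer.encode(audio_prompt)
                         if audio_prompt is not None else None)

        # Stage 1: the autoregressive LM predicts audio tokens conditioned
        # on the text prompt and, if given, the audio-prompt tokens.
        tokens = self.lm.generate(text=text_prompt,
                                  audio_tokens=prompt_tokens,
                                  max_new_tokens=max_new_tokens)

        # Stage 2: the flow-matching model upsamples the token sequence into
        # a high-sampling-rate waveform with fine-grained acoustic detail
        # learned from an acoustic codec model.
        return self.sr_model.decode(tokens)
```

Any objects exposing these three methods could be dropped in. The design point the abstract emphasizes is the split: discrete single-codebook tokens keep the language-model stage efficient and semantically rich, while the flow-matching stage restores fine acoustic detail at the higher output sampling rate.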

@article{zhang2025_2503.00084,
  title={InspireMusic: Integrating Super Resolution and Large Language Model for High-Fidelity Long-Form Music Generation},
  author={Chong Zhang and Yukun Ma and Qian Chen and Wen Wang and Shengkui Zhao and Zexu Pan and Hao Wang and Chongjia Ni and Trung Hieu Nguyen and Kun Zhou and Yidi Jiang and Chaohong Tan and Zhifu Gao and Zhihao Du and Bin Ma},
  journal={arXiv preprint arXiv:2503.00084},
  year={2025}
}