IMPACT: Iterative Mask-based Parallel Decoding for Text-to-Audio Generation with Diffusion Modeling

Main: 8 pages · Bibliography: 3 pages · Appendix: 7 pages · 11 figures · 11 tables
Abstract

Text-to-audio generation synthesizes realistic sounds or music given a natural language prompt. Diffusion-based frameworks, including the Tango and AudioLDM series, represent the state of the art in text-to-audio generation. Despite achieving high audio fidelity, they incur significant inference latency due to the slow diffusion sampling process. MAGNET, a mask-based model operating on discrete tokens, addresses slow inference through iterative mask-based parallel decoding. However, its audio quality still lags behind that of diffusion-based models. In this work, we introduce IMPACT, a text-to-audio generation framework that achieves high performance in audio quality and fidelity while ensuring fast inference. IMPACT utilizes iterative mask-based parallel decoding in a continuous latent space powered by diffusion modeling. This approach eliminates the fidelity constraints of discrete tokens while maintaining competitive inference speed. Results on AudioCaps demonstrate that IMPACT achieves state-of-the-art performance on key metrics including Fréchet Distance (FD) and Fréchet Audio Distance (FAD) while significantly reducing latency compared to prior models. The project website is available at this https URL.
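To illustrate the decoding strategy the abstract refers to, below is a minimal, hypothetical sketch of MaskGIT-style iterative mask-based parallel decoding. It is not the authors' implementation: the `predict` model, the cosine unmasking schedule, and all names are assumptions for illustration. The idea is that the sequence starts fully masked, the model fills every masked position in parallel at each step, and only the most confident predictions are committed while the rest are re-masked for the next iteration.

```python
import math
import random

def iterative_mask_decode(predict, length, num_steps=8, seed=0):
    """Illustrative MaskGIT-style iterative parallel decoding.

    `predict` stands in for the generative model: given the current
    sequence (None = masked position), it returns a (value, confidence)
    pair for every position.
    """
    rng = random.Random(seed)
    seq = [None] * length  # start fully masked
    for step in range(num_steps):
        # Model proposes a value for every position in parallel.
        proposals = predict(seq, rng)
        # Cosine schedule: fraction of positions still masked after this step.
        mask_ratio = math.cos(math.pi / 2 * (step + 1) / num_steps)
        target_committed = length - int(length * mask_ratio)
        # Rank currently-masked positions by confidence; commit the top ones.
        masked = [i for i in range(length) if seq[i] is None]
        masked.sort(key=lambda i: proposals[i][1], reverse=True)
        already_committed = length - len(masked)
        to_commit = max(target_committed - already_committed, 1)
        for i in masked[:to_commit]:
            seq[i] = proposals[i][0]
        if all(v is not None for v in seq):
            break
    return seq

# Toy stand-in "model": random token values with random confidences.
def toy_predict(seq, rng):
    return [(rng.randint(0, 9), rng.random()) for _ in seq]

decoded = iterative_mask_decode(toy_predict, length=16)
```

In IMPACT the committed entries would live in a continuous latent space produced by a diffusion head rather than being discrete tokens as in this toy version; the schedule and confidence ranking above are generic choices, not the paper's specific design.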

@article{huang2025_2506.00736,
  title={IMPACT: Iterative Mask-based Parallel Decoding for Text-to-Audio Generation with Diffusion Modeling},
  author={Kuan-Po Huang and Shu-wen Yang and Huy Phan and Bo-Ru Lu and Byeonggeun Kim and Sashank Macha and Qingming Tang and Shalini Ghosh and Hung-yi Lee and Chieh-Chi Kao and Chao Wang},
  journal={arXiv preprint arXiv:2506.00736},
  year={2025}
}