
CLAP-ART: Automated Audio Captioning with Semantic-rich Audio Representation Tokenizer

Main: 4 pages · 2 figures · 4 tables · Bibliography: 1 page
Abstract

Automated Audio Captioning (AAC) aims to describe the semantic contexts of general sounds, including acoustic events and scenes, by leveraging effective acoustic features. To enhance performance, the AAC method EnCLAP employed discrete tokens from EnCodec as an effective input for fine-tuning the language model BART. However, EnCodec is designed to reconstruct waveforms rather than to capture the semantic contexts of general sounds that AAC should describe. To address this issue, we propose CLAP-ART, an AAC method that uses "semantic-rich and discrete" tokens as input. CLAP-ART computes these tokens from pre-trained audio representations through vector quantization. We experimentally confirmed that CLAP-ART outperforms the EnCLAP baseline on two AAC benchmarks, indicating that semantic-rich discrete tokens derived from semantically rich audio representations are beneficial for AAC.
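The tokenization idea described in the abstract, quantizing frame-level embeddings from a pretrained audio encoder such as CLAP into discrete indices that a language model can consume, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the codebook size, embedding dimension, single-codebook design, and the dummy encoder output are assumptions made for the example.

import torch
import torch.nn as nn

class SimpleVectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantizer over one learned codebook (illustrative)."""
    def __init__(self, num_codes: int = 1024, dim: int = 512):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.normal_(self.codebook.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) continuous audio representations.
        # Returns discrete token indices of shape (batch, frames).
        flat = x.reshape(-1, x.size(-1))                  # (batch*frames, dim)
        dists = torch.cdist(flat, self.codebook.weight)   # (batch*frames, num_codes)
        return dists.argmin(dim=-1).reshape(x.shape[:-1])

# Dummy embeddings standing in for frame-level outputs of a pretrained audio encoder.
frame_embeddings = torch.randn(2, 50, 512)   # (batch, frames, dim)
tokenizer = SimpleVectorQuantizer()
audio_tokens = tokenizer(frame_embeddings)   # discrete tokens usable as language-model input
print(audio_tokens.shape)                    # torch.Size([2, 50])

In the paper's setting, such discrete indices replace EnCodec tokens as the acoustic input for fine-tuning BART; the sketch above only shows the quantization step, not the captioning model.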

@article{takeuchi2025_2506.00800,
  title={CLAP-ART: Automated Audio Captioning with Semantic-rich Audio Representation Tokenizer},
  author={Daiki Takeuchi and Binh Thien Nguyen and Masahiro Yasuda and Yasunori Ohishi and Daisuke Niizumi and Noboru Harada},
  journal={arXiv preprint arXiv:2506.00800},
  year={2025}
}