ESPnet-Codec: Comprehensive Training and Evaluation of Neural Codecs for Audio, Music, and Speech

24 September 2024
Jiatong Shi
Jinchuan Tian
Yihan Wu
Jee-weon Jung
Jia Qi Yip
Yoshiki Masuyama
William Chen
Yuning Wu
Yuxun Tang
Massa Baali
Dareen Alharhi
Dong Zhang
Ruifan Deng
Tejes Srivastava
Haibin Wu
Alexander H. Liu
Bhiksha Raj
Qin Jin
Ruihua Song
Shinji Watanabe
Abstract

Neural codecs have become crucial to recent speech and audio generation research. In addition to their signal compression capabilities, discrete codecs have been found to enhance downstream training efficiency and compatibility with autoregressive language models. However, as ever more downstream applications are investigated, it has become difficult to ensure fair comparisons across these diverse settings. To address this, we present ESPnet-Codec, a new open-source platform built on ESPnet that focuses on neural codec training and evaluation. ESPnet-Codec offers recipes for audio, music, and speech covering the training and evaluation of several widely adopted codec models. Alongside ESPnet-Codec, we present VERSA, a standalone evaluation toolkit that provides a comprehensive assessment of codec performance across more than 20 audio evaluation metrics. Notably, we demonstrate that ESPnet-Codec can be integrated into six ESPnet tasks, supporting diverse applications.
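VERSA's own API and metric set are not detailed in this abstract. As an illustration of the kind of intrusive reconstruction metric such an evaluation toolkit computes, here is a minimal, self-contained sketch of signal-to-noise ratio (SNR) between a reference waveform and its codec reconstruction; the function name `snr_db` and the toy signals are our own, not part of VERSA.

```python
import numpy as np

def snr_db(reference, decoded):
    """SNR in dB between a reference waveform and its codec
    reconstruction: ratio of signal energy to error energy
    (higher means a more faithful reconstruction)."""
    reference = np.asarray(reference, dtype=np.float64)
    decoded = np.asarray(decoded, dtype=np.float64)
    noise = reference - decoded
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Toy example: a 440 Hz sine at 16 kHz and a lightly perturbed
# "decoded" copy standing in for a codec's output.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
ref = np.sin(2 * np.pi * 440.0 * t)
dec = ref + 0.01 * np.random.default_rng(0).standard_normal(ref.shape)
print(f"SNR: {snr_db(ref, dec):.1f} dB")
```

A full toolkit like VERSA complements such intrusive signal-level metrics with perceptual and reference-free measures, which is why evaluating a codec on a single number is rarely sufficient.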

@article{shi2025_2409.15897,
  title={ESPnet-Codec: Comprehensive Training and Evaluation of Neural Codecs for Audio, Music, and Speech},
  author={Jiatong Shi and Jinchuan Tian and Yihan Wu and Jee-weon Jung and Jia Qi Yip and Yoshiki Masuyama and William Chen and Yuning Wu and Yuxun Tang and Massa Baali and Dareen Alharhi and Dong Zhang and Ruifan Deng and Tejes Srivastava and Haibin Wu and Alexander H. Liu and Bhiksha Raj and Qin Jin and Ruihua Song and Shinji Watanabe},
  journal={arXiv preprint arXiv:2409.15897},
  year={2025}
}