IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System

8 February 2025
Wei Deng
Siyi Zhou
Jingchen Shu
Jinchao Wang
Lu Wang
Abstract

Recently, large language model (LLM) based text-to-speech (TTS) systems have gradually become the mainstream in the industry due to their high naturalness and powerful zero-shot voice cloning ability. In this paper, we introduce the IndexTTS system, which is mainly based on the XTTS and Tortoise models, with several novel improvements. Specifically, in Chinese scenarios we adopt a hybrid modeling method that combines characters and pinyin, making the pronunciations of polyphonic and long-tail characters controllable. We also performed a comparative analysis of Vector Quantization (VQ) and Finite-Scalar Quantization (FSQ) in terms of codebook utilization for acoustic speech tokens. To further enhance the effect and stability of voice cloning, we introduce a conformer-based speech conditional encoder and replace the speech-code decoder with BigVGAN2. Compared with XTTS, IndexTTS achieves significant improvements in naturalness, content consistency, and zero-shot voice cloning. Compared with popular open-source TTS systems such as Fish-Speech, CosyVoice2, FireRedTTS, and F5-TTS, IndexTTS has a relatively simple training process, more controllable usage, and faster inference speed, while also surpassing them in performance. Our demos are available at this https URL.
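To illustrate the hybrid character/pinyin modeling mentioned in the abstract, the following is a minimal Python sketch (not the authors' code) of a text front-end that keeps ordinary characters but substitutes tone-numbered pinyin for user-selected polyphonic or long-tail characters, making their pronunciation explicit. It assumes the third-party pypinyin package and pure Chinese input; the function name hybrid_tokens and the force_pinyin parameter are illustrative.

from pypinyin import lazy_pinyin, Style

def hybrid_tokens(text, force_pinyin):
    """Return a mixed character/pinyin token sequence for a Chinese sentence.

    Characters listed in `force_pinyin` are replaced by tone-numbered pinyin,
    pinning down the reading of polyphonic or rare characters; all other
    characters pass through unchanged. Assumes `text` contains only Chinese
    characters so that lazy_pinyin yields exactly one reading per character.
    """
    readings = lazy_pinyin(text, style=Style.TONE3)  # e.g. 'hang2', 'xing2'
    return [py if ch in force_pinyin else ch for ch, py in zip(text, readings)]

if __name__ == "__main__":
    # "行" is polyphonic (xing2 / hang2); forcing pinyin makes the intended reading explicit.
    print(hybrid_tokens("银行在哪里", force_pinyin={"行"}))
    # -> ['银', 'hang2', '在', '哪', '里']

The resulting mixed token sequence could then be fed to the TTS text tokenizer in place of the raw character string, which is one way such controllability can be exposed to users.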

@article{deng2025_2502.05512,
  title={IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System},
  author={Wei Deng and Siyi Zhou and Jingchen Shu and Jinchao Wang and Lu Wang},
  journal={arXiv preprint arXiv:2502.05512},
  year={2025}
}