DiffCSS: Diverse and Expressive Conversational Speech Synthesis with Diffusion Models

27 February 2025

Weihao Wu, Zhiwei Lin, Yixuan Zhou, Jingbei Li, Rui Niu, Qinghua Wu, Songjun Cao, Long Ma, Zhiyong Wu
Abstract

Conversational speech synthesis (CSS) aims to synthesize speech that is both contextually appropriate and expressive, and considerable effort has been devoted to improving the understanding of conversational context. However, existing CSS systems are limited to deterministic prediction, overlooking the diversity of potential responses. Moreover, they rarely employ language model (LM)-based TTS backbones, limiting the naturalness and quality of the synthesized speech. To address these issues, in this paper we propose DiffCSS, an innovative CSS framework that leverages diffusion models and an LM-based TTS backbone to generate diverse, expressive, and contextually coherent speech. A diffusion-based context-aware prosody predictor is proposed to sample diverse prosody embeddings conditioned on multimodal conversational context. Then a prosody-controllable LM-based TTS backbone is developed to synthesize high-quality speech from the sampled prosody embeddings. Experimental results demonstrate that the speech synthesized by DiffCSS is more diverse, contextually coherent, and expressive than that of existing CSS systems.
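The abstract's diffusion-based prosody predictor starts from Gaussian noise and iteratively denoises it into a prosody embedding, conditioned on the conversational context. The sketch below illustrates only the reverse-process mechanics with a DDIM-style deterministic update; the denoiser here is a hypothetical stand-in that assumes the clean embedding equals the context vector, whereas the paper's actual predictor is a learned network whose stochastic sampling is what yields diverse prosody.

```python
import math
import random

def alpha_bar(t, T):
    """Cosine schedule: cumulative signal fraction at step t (1 at t=0)."""
    return math.cos((t / (T + 1)) * math.pi / 2) ** 2

def toy_eps_predictor(x, context, a_bar):
    """Hypothetical stand-in for the learned denoiser. A real model would
    be a neural network conditioned on the multimodal conversational
    context; here we simply pretend the clean embedding is `context`."""
    root_ab = math.sqrt(a_bar)
    root_1mab = math.sqrt(1.0 - a_bar)
    return [(xi - root_ab * ci) / root_1mab for xi, ci in zip(x, context)]

def sample_prosody(context, steps=50, seed=0):
    """DDIM-style deterministic reverse process: start from Gaussian
    noise and iteratively denoise toward a prosody embedding."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in context]  # pure noise at t = steps
    for t in range(steps, 0, -1):
        ab, ab_prev = alpha_bar(t, steps), alpha_bar(t - 1, steps)
        eps = toy_eps_predictor(x, context, ab)
        # Estimate the clean sample, then step to the previous noise level.
        x0 = [(xi - math.sqrt(1.0 - ab) * ei) / math.sqrt(ab)
              for xi, ei in zip(x, eps)]
        x = [math.sqrt(ab_prev) * x0i + math.sqrt(1.0 - ab_prev) * ei
             for x0i, ei in zip(x0, eps)]
    return x

context = [0.2, -0.5, 1.0, 0.0]  # hypothetical 4-dim context embedding
embedding = sample_prosody(context, seed=1)
```

With a learned denoiser, injecting fresh noise at each reverse step (DDPM sampling) would turn this into a distribution over prosody embeddings rather than a single point, which is what lets the system produce multiple plausible prosodies for the same context.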

@article{wu2025_2502.19924,
  title={DiffCSS: Diverse and Expressive Conversational Speech Synthesis with Diffusion Models},
  author={Weihao Wu and Zhiwei Lin and Yixuan Zhou and Jingbei Li and Rui Niu and Qinghua Wu and Songjun Cao and Long Ma and Zhiyong Wu},
  journal={arXiv preprint arXiv:2502.19924},
  year={2025}
}