ResearchTrend.AI
Segmentation-Variant Codebooks for Preservation of Paralinguistic and Prosodic Information

21 May 2025
Nicholas Sanders
Yuanchao Li
Korin Richmond
Simon King
Abstract

Quantization in SSL speech models (e.g., HuBERT) improves compression and performance in tasks like language modeling, resynthesis, and text-to-speech but often discards prosodic and paralinguistic information (e.g., emotion, prominence). While increasing codebook size mitigates some loss, it inefficiently raises bitrates. We propose Segmentation-Variant Codebooks (SVCs), which quantize speech at distinct linguistic units (frame, phone, word, utterance), factorizing it into multiple streams of segment-specific discrete features. Our results show that SVCs are significantly more effective at preserving prosodic and paralinguistic information across probing tasks. Additionally, we find that pooling before rather than after discretization better retains segment-level information. Resynthesis experiments further confirm improved style realization and slightly improved quality while preserving intelligibility.
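The core idea in the abstract — mean-pooling SSL features over each segmentation level *before* discretization, with a separate codebook per level — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the segment boundaries, codebooks, and nearest-neighbour quantizer here are toy stand-ins for whatever the paper actually uses.

```python
# Hypothetical sketch of Segmentation-Variant Codebooks (SVCs).
# Frame-level SSL features are pooled over each linguistic unit
# (frame, phone, word, utterance) and each pooled stream is then
# quantized against its own codebook, yielding one discrete
# stream per segmentation level.
import numpy as np

def mean_pool(features, boundaries):
    """Mean-pool frame features over [start, end) segments."""
    return np.stack([features[s:e].mean(axis=0) for s, e in boundaries])

def quantize(vectors, codebook):
    """Assign each pooled vector to its nearest codebook entry."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
frames = rng.normal(size=(50, 16))  # 50 frames of 16-dim SSL features (toy data)

# Toy segment boundaries for each level; real boundaries would come
# from forced alignment or a segmentation model.
segmentations = {
    "frame": [(i, i + 1) for i in range(50)],
    "phone": [(0, 10), (10, 25), (25, 50)],
    "word": [(0, 25), (25, 50)],
    "utterance": [(0, 50)],
}
codebooks = {lvl: rng.normal(size=(8, 16)) for lvl in segmentations}

# One discrete stream per segmentation level: pool first, then discretize.
streams = {
    lvl: quantize(mean_pool(frames, bounds), codebooks[lvl])
    for lvl, bounds in segmentations.items()
}
print({lvl: len(s) for lvl, s in streams.items()})
# → {'frame': 50, 'phone': 3, 'word': 2, 'utterance': 1}
```

Note that pooling before quantization (as above) means each codebook sees segment-level summaries, which is the ordering the paper finds better retains segment-level information than quantizing frames first and pooling the discrete codes afterwards.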

@article{sanders2025_2505.15667,
  title={Segmentation-Variant Codebooks for Preservation of Paralinguistic and Prosodic Information},
  author={Nicholas Sanders and Yuanchao Li and Korin Richmond and Simon King},
  journal={arXiv preprint arXiv:2505.15667},
  year={2025}
}