Quantization in self-supervised learning (SSL) speech models (e.g., HuBERT) improves compression and performance in tasks like language modeling, resynthesis, and text-to-speech but often discards prosodic and paralinguistic information (e.g., emotion, prominence). While increasing codebook size mitigates some of this loss, it inefficiently raises bitrates. We propose Segmentation-Variant Codebooks (SVCs), which quantize speech at distinct linguistic units (frame, phone, word, utterance), factorizing it into multiple streams of segment-specific discrete features. Our results show that SVCs are significantly more effective at preserving prosodic and paralinguistic information across probing tasks. Additionally, we find that pooling before rather than after discretization better retains segment-level information. Resynthesis experiments further confirm improved style realization and slightly improved quality while preserving intelligibility.
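To make the idea concrete, below is a minimal sketch (not the authors' code) of the core operations the abstract describes: frame-level SSL features are mean-pooled to each segmentation level before discretization, and each level is quantized with its own codebook, yielding parallel discrete streams. All names, codebook sizes, feature dimensions, and segment boundaries are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' implementation): pool frame-level
# SSL features to each segmentation level BEFORE quantization, with one codebook
# per level. Codebook sizes, dimensions, and boundaries are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def mean_pool(frames: np.ndarray, boundaries: list[tuple[int, int]]) -> np.ndarray:
    """Average frame features within each (start, end) segment."""
    return np.stack([frames[s:e].mean(axis=0) for s, e in boundaries])

def quantize(vectors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Assign each vector to its nearest codebook entry (Euclidean distance)."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy inputs: 100 frames of 768-dim SSL features (HuBERT-sized), plus
# hypothetical phone/word boundaries, e.g. from a forced aligner.
frames = rng.standard_normal((100, 768)).astype(np.float32)
phone_bounds = [(0, 10), (10, 25), (25, 40), (40, 60), (60, 80), (80, 100)]
word_bounds = [(0, 40), (40, 100)]

# One codebook per segmentation level (sizes are arbitrary here); in practice
# these would be learned, e.g. by k-means over pooled features.
codebooks = {
    "frame": rng.standard_normal((500, 768)).astype(np.float32),
    "phone": rng.standard_normal((256, 768)).astype(np.float32),
    "word": rng.standard_normal((128, 768)).astype(np.float32),
    "utterance": rng.standard_normal((64, 768)).astype(np.float32),
}

# Factorize the utterance into parallel discrete streams: frames are quantized
# directly; phone/word/utterance vectors are pooled first, then quantized.
streams = {
    "frame": quantize(frames, codebooks["frame"]),
    "phone": quantize(mean_pool(frames, phone_bounds), codebooks["phone"]),
    "word": quantize(mean_pool(frames, word_bounds), codebooks["word"]),
    "utterance": quantize(frames.mean(axis=0, keepdims=True), codebooks["utterance"]),
}

for level, ids in streams.items():
    print(level, ids.shape, ids[:5])
```

Pooling before discretization, as the abstract reports, lets each codebook specialize to statistics at its own segmentation level rather than quantizing frames first and averaging the resulting codes.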
@article{sanders2025_2505.15667,
  title={Segmentation-Variant Codebooks for Preservation of Paralinguistic and Prosodic Information},
  author={Nicholas Sanders and Yuanchao Li and Korin Richmond and Simon King},
  journal={arXiv preprint arXiv:2505.15667},
  year={2025}
}