
Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech

Abstract

This work studies the capabilities of a large language model (LLM) to understand paralinguistic aspects of speech without fine-tuning its weights. We utilize an end-to-end system with a speech encoder, which is trained to produce token embeddings such that the LLM's response to an expressive speech prompt is aligned with its response to a semantically matching text prompt that has also been conditioned on the user's speaking style. This framework enables the encoder to generate tokens that capture both linguistic and paralinguistic information and effectively convey them to the LLM, even when the LLM's weights remain completely frozen. To the best of our knowledge, our work is the first to explore how to induce a frozen LLM to understand more than just linguistic content from speech inputs in a general interaction setting. Experiments demonstrate that our system is able to produce higher-quality and more empathetic responses to expressive speech prompts compared to several baselines.
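The training objective described above can be pictured as a response-alignment (distillation) loss: the frozen LLM's behavior when conditioned on a style-annotated text prompt serves as the teacher, and the same LLM conditioned on the speech encoder's output embeddings is the student. Below is a minimal PyTorch sketch of this idea, not the paper's actual implementation; all module and argument names (frozen_llm, speech_encoder, embed_tokens, etc.) are hypothetical, and frozen_llm is assumed to be a callable mapping input embeddings to per-position vocabulary logits.

import torch
import torch.nn.functional as F

def alignment_loss(frozen_llm, speech_encoder, embed_tokens,
                   audio_feats, style_text_ids, response_ids):
    """Align the frozen LLM's response to a speech prompt with its
    response to a style-conditioned text prompt. Only the speech
    encoder is trainable; the LLM's weights stay frozen throughout.
    All component names here are illustrative assumptions."""
    resp_len = response_ids.size(1)

    # Teacher: the LLM conditioned on the transcript plus a textual
    # description of the speaking style. No gradients flow here.
    with torch.no_grad():
        text_emb = embed_tokens(style_text_ids)        # (B, T_text, D)
        resp_emb = embed_tokens(response_ids)          # (B, T_resp, D)
        teacher_logits = frozen_llm(torch.cat([text_emb, resp_emb], dim=1))
        teacher_logits = teacher_logits[:, -resp_len:]  # logits over response span

    # Student: the same frozen LLM conditioned on encoder-produced
    # speech token embeddings; gradients reach only speech_encoder.
    speech_emb = speech_encoder(audio_feats)           # (B, T_speech, D)
    student_logits = frozen_llm(torch.cat([speech_emb, resp_emb], dim=1))
    student_logits = student_logits[:, -resp_len:]

    # KL divergence between the two next-token distributions over the
    # response positions drives the encoder to convey both linguistic
    # and paralinguistic information in a form the LLM already understands.
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")

Because the supervision signal is the LLM's own output distribution rather than transcripts or style labels, this kind of objective lets the encoder learn whatever token representation best steers the frozen model, which is what allows paralinguistic cues to survive the speech-to-embedding bottleneck.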

@article{kang2025_2410.01162,
  title={Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech},
  author={Wonjune Kang and Junteng Jia and Chunyang Wu and Wei Zhou and Egor Lakomkin and Yashesh Gaur and Leda Sari and Suyoun Kim and Ke Li and Jay Mahadeokar and Ozlem Kalinli},
  journal={arXiv preprint arXiv:2410.01162},
  year={2025}
}