Efficient and Direct Duplex Modeling for Speech-to-Speech Language Model

Abstract

Spoken dialogue is an intuitive form of human-computer interaction, yet current speech language models often remain constrained to turn-based exchanges, lacking real-time adaptability such as user barge-in. We propose a novel duplex speech-to-speech (S2S) architecture featuring continuous user inputs and codec agent outputs with channel fusion that directly models simultaneous user and agent streams. Using a pretrained streaming encoder for user input enables the first duplex S2S model without requiring speech pretraining. Separate architectures for agent and user modeling facilitate codec fine-tuning for better agent voices and halve the bitrate (0.6 kbps) compared to previous works. Experimental results show that the proposed model outperforms previous duplex models in reasoning, turn-taking, and barge-in abilities. The model requires significantly less speech data, since speech pretraining is skipped, which markedly simplifies the process of building a duplex S2S model from any LLM. Finally, it is the first openly available duplex S2S model with training and inference code to foster reproducibility.

@article{hu2025_2505.15670,
  title={Efficient and Direct Duplex Modeling for Speech-to-Speech Language Model},
  author={Ke Hu and Ehsan Hosseini-Asl and Chen Chen and Edresson Casanova and Subhankar Ghosh and Piotr Żelasko and Zhehuai Chen and Jason Li and Jagadeesh Balam and Boris Ginsburg},
  journal={arXiv preprint arXiv:2505.15670},
  year={2025}
}