
Bi-directional Context-Enhanced Speech Large Language Models for Multilingual Conversational ASR

Main: 4 pages
1 figure
3 tables
Bibliography: 1 page
Abstract

This paper introduces the integration of language-specific bi-directional context into a speech large language model (SLLM) to improve multilingual continuous conversational automatic speech recognition (ASR). We propose a character-level contextual masking strategy during training, which randomly removes portions of the context to enhance robustness and better emulate the flawed transcriptions that may occur during inference. For decoding, a two-stage pipeline is used: initial isolated segment decoding followed by context-aware re-decoding using neighboring hypotheses. Evaluated on the 1500-hour Multilingual Conversational Speech and Language Model (MLC-SLM) corpus covering eleven languages, our method achieves an 18% relative improvement over a strong baseline, outperforming even the model trained on 6000 hours of data for the MLC-SLM competition. These results underscore the significant benefit of incorporating contextual information in multilingual continuous conversational ASR.
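The two mechanisms in the abstract can be illustrated with a short Python sketch. This is a minimal, hypothetical rendering, not the paper's released code: the `asr_decode(audio, left_ctx, right_ctx)` interface, the function names, and the 15% drop probability are all assumptions made for illustration; the paper's actual prompting scheme and masking rate may differ.

```python
import random

def mask_context_chars(context: str, drop_prob: float = 0.15) -> str:
    """Character-level contextual masking (sketch).

    Independently drops each character of the neighboring-transcript
    context with probability drop_prob during training, so the model
    learns to tolerate the imperfect hypotheses it will see at
    inference. The 0.15 rate is an assumed value, not from the paper.
    """
    return "".join(ch for ch in context if random.random() >= drop_prob)

def two_stage_decode(segments, asr_decode):
    """Two-stage context-aware decoding (sketch).

    Stage 1: decode every segment in isolation (empty context).
    Stage 2: re-decode each segment, feeding the stage-1 hypotheses of
    its left and right neighbors as bi-directional context.
    asr_decode(audio, left_ctx, right_ctx) -> str is a hypothetical
    interface to the SLLM.
    """
    # Stage 1: isolated first-pass hypotheses.
    first_pass = [asr_decode(seg, left_ctx="", right_ctx="") for seg in segments]

    # Stage 2: re-decode with neighboring first-pass hypotheses.
    final = []
    for i, seg in enumerate(segments):
        left = first_pass[i - 1] if i > 0 else ""
        right = first_pass[i + 1] if i + 1 < len(segments) else ""
        final.append(asr_decode(seg, left_ctx=left, right_ctx=right))
    return final
```

In this reading, `mask_context_chars` would be applied to the gold neighboring transcripts at training time, while `two_stage_decode` mirrors the inference pipeline the abstract describes: a context-free first pass whose hypotheses become the bi-directional context for the second pass.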

@article{peng2025_2506.13396,
  title={Bi-directional Context-Enhanced Speech Large Language Models for Multilingual Conversational ASR},
  author={Yizhou Peng and Hexin Liu and Eng Siong Chng},
  journal={arXiv preprint arXiv:2506.13396},
  year={2025}
}