Scheduled Interleaved Speech-Text Training for Speech-to-Speech Translation with LLMs

Main: 4 pages; Bibliography: 1 page; 5 figures, 3 tables
Abstract

Speech-to-speech translation (S2ST) has advanced with large language models (LLMs), which are fine-tuned on discrete speech units. In such approaches, modality adaptation from text to speech remains a challenge: LLMs are pre-trained on text-only data, which makes it difficult to adapt them to the speech modality with limited speech-to-speech data. To address this training difficulty, we propose scheduled interleaved speech-text training. During training, we use interleaved speech-text sequences instead of speech units alone, where aligned text tokens are interleaved at the word level. We gradually decrease the ratio of text as training progresses to facilitate progressive modality adaptation from text to speech. We conduct experimental evaluations by fine-tuning LLaMA3.2-1B for S2ST on the CVSS dataset. The proposed method consistently improves translation performance, especially for language pairs with limited training data.
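The abstract describes word-level interleaving of aligned text tokens with speech units, with the text ratio decayed over training. A minimal sketch of that idea is below; the function names, the per-word random selection, and the linear decay schedule are all assumptions for illustration, as the abstract does not specify the paper's exact alignment or scheduling details.

```python
import random


def interleave_units(word_alignments, text_ratio, rng=None):
    """Build a word-level interleaved speech-text sequence.

    word_alignments: list of (text_tokens, speech_units) pairs, one per word,
    each a list of token ids/strings. For roughly a fraction `text_ratio` of
    the words, the aligned text tokens are emitted instead of speech units.
    (Illustrative sketch; the paper's actual interleaving may differ.)
    """
    rng = rng or random.Random(0)
    sequence = []
    for text_tokens, speech_units in word_alignments:
        if rng.random() < text_ratio:
            sequence.extend(text_tokens)   # keep this word as text
        else:
            sequence.extend(speech_units)  # keep this word as speech units
    return sequence


def text_ratio_schedule(step, total_steps, start=1.0, end=0.0):
    """Linearly decay the text ratio from `start` to `end` over training.

    An assumed linear schedule; the abstract only says the ratio is
    gradually decreased as training progresses.
    """
    frac = min(step / max(total_steps, 1), 1.0)
    return start + (end - start) * frac
```

With `text_ratio=1.0` every word is emitted as text and with `text_ratio=0.0` the sequence is pure speech units, so sweeping the schedule from 1 toward 0 moves the training targets progressively from the text modality to the speech modality.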

@article{futami2025_2506.10299,
  title={Scheduled Interleaved Speech-Text Training for Speech-to-Speech Translation with LLMs},
  author={Hayato Futami and Emiru Tsunoo and Yosuke Kashiwagi and Yuki Ito and Hassan Shahmohammadi and Siddhant Arora and Shinji Watanabe},
  journal={arXiv preprint arXiv:2506.10299},
  year={2025}
}