
NTU Speechlab LLM-Based Multilingual ASR System for Interspeech MLC-SLM Challenge 2025

16 June 2025
Yizhou Peng
Bin Wang
Yi-Wen Chao
Ziyang Ma
Haoyang Zhang
Hexin Liu
Xie Chen
Eng Siong Chng
Main text: 4 pages · 3 figures · 4 tables · Bibliography: 1 page
Abstract

This report details the NTU Speechlab system developed for the Interspeech 2025 Multilingual Conversational Speech and Language Model (MLC-SLM) Challenge (Task I), where we achieved 5th place. We present comprehensive analyses of our multilingual automatic speech recognition system, highlighting key advancements in model architecture, data selection, and training strategies. In particular, language-specific prompts and model averaging techniques were instrumental in boosting system performance across diverse languages. Compared to the initial baseline system, our final model reduced the average Mix Error Rate from 20.2% to 10.6%, representing an absolute improvement of 9.6% (a relative improvement of 48%) on the evaluation set. Our results demonstrate the effectiveness of our approach and offer practical insights for future Speech Large Language Models.
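The abstract credits model averaging (alongside language-specific prompts) as a key contributor to the improvement. As a rough illustration of what checkpoint averaging typically involves, the sketch below averages the parameters of several saved checkpoints; the framework, checkpoint format, and file names are assumptions for illustration and are not taken from the report.

```python
# Minimal sketch of checkpoint (model) averaging.
# Assumes PyTorch state-dict checkpoints; the paper does not specify the format.
import torch

def average_checkpoints(paths):
    """Return a state dict whose parameters are the element-wise mean
    of the parameters in the given checkpoint files."""
    avg_state = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg_state is None:
            # Clone and cast to float so integer buffers accumulate safely.
            avg_state = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg_state[k] += v.float()
    n = len(paths)
    return {k: v / n for k, v in avg_state.items()}

# Hypothetical usage: average the last few training checkpoints before evaluation.
# averaged = average_checkpoints(["ckpt_epoch8.pt", "ckpt_epoch9.pt", "ckpt_epoch10.pt"])
# torch.save(averaged, "averaged_model.pt")
```

Averaging the final few checkpoints is a common, low-cost way to smooth out training noise before decoding; the exact number of checkpoints averaged here is not stated in the abstract.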

@article{peng2025_2506.13339,
  title={NTU Speechlab LLM-Based Multilingual ASR System for Interspeech MLC-SLM Challenge 2025},
  author={Yizhou Peng and Bin Wang and Yi-Wen Chao and Ziyang Ma and Haoyang Zhang and Hexin Liu and Xie Chen and Eng Siong Chng},
  journal={arXiv preprint arXiv:2506.13339},
  year={2025}
}