mSTEB: Massively Multilingual Evaluation of LLMs on Speech and Text Tasks

10 June 2025
Luel Hagos Beyene, Vivek Verma, Min Ma, Jesujoba Oluwadara Alabi, Fabian David Schmidt, Joyce Nakatumba-Nabende, David Ifeoluwa Adelani
Abstract

Large Language Models (LLMs) have demonstrated impressive performance on a wide range of tasks, including in multimodal settings such as speech. However, their evaluation is often limited to English and a few high-resource languages. For low-resource languages, there is no standardized evaluation benchmark. In this paper, we address this gap by introducing mSTEB, a new benchmark to evaluate the performance of LLMs on a wide range of tasks covering language identification, text classification, question answering, and translation on both speech and text modalities. We evaluate the performance of leading LLMs such as Gemini 2.0 Flash and GPT-4o (Audio) and state-of-the-art open models such as Qwen 2 Audio and Gemma 3 27B. Our evaluation shows a wide gap in performance between high-resource and low-resource languages, especially for languages spoken in Africa and the Americas/Oceania. Our findings show that more investment is needed to address their under-representation in LLM coverage.
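To make the reported gap concrete, the following minimal Python sketch shows one way per-language accuracies could be aggregated into regional averages, the kind of high-resource versus low-resource comparison the abstract describes. All language codes, region assignments, and scores below are illustrative placeholders, not figures from the paper.

# A minimal sketch (not from the paper) of aggregating per-language
# benchmark scores into per-region averages. All values are hypothetical.
from collections import defaultdict

# Hypothetical per-language accuracies on a language-identification task.
scores = {
    "eng": 0.98, "fra": 0.95,   # high-resource (Europe)
    "amh": 0.62, "lug": 0.48,   # low-resource (Africa)
    "grn": 0.41,                # low-resource (Americas/Oceania)
}

# Hypothetical mapping from language code to geographic region.
regions = {
    "eng": "Europe", "fra": "Europe",
    "amh": "Africa", "lug": "Africa",
    "grn": "Americas/Oceania",
}

def region_averages(scores, regions):
    """Average the per-language scores within each region."""
    by_region = defaultdict(list)
    for lang, acc in scores.items():
        by_region[regions[lang]].append(acc)
    return {region: sum(accs) / len(accs) for region, accs in by_region.items()}

if __name__ == "__main__":
    for region, avg in sorted(region_averages(scores, regions).items()):
        print(f"{region}: {avg:.2f}")

With placeholder numbers like these, the printed averages immediately surface the regional disparity the paper reports for African and Americas/Oceania languages.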

@article{beyene2025_2506.08400,
  title={mSTEB: Massively Multilingual Evaluation of LLMs on Speech and Text Tasks},
  author={Luel Hagos Beyene and Vivek Verma and Min Ma and Jesujoba O. Alabi and Fabian David Schmidt and Joyce Nakatumba-Nabende and David Ifeoluwa Adelani},
  journal={arXiv preprint arXiv:2506.08400},
  year={2025}
}
Main: 6 pages, 3 figures, 5 tables; Bibliography: 2 pages