
Mutarjim: Advancing Bidirectional Arabic-English Translation with a Small Language Model

Main: 12 pages · 4 figures · 10 tables · Bibliography: 4 pages · Appendix: 3 pages
Abstract

We introduce Mutarjim, a compact yet powerful language model for bidirectional Arabic-English translation. While large-scale LLMs have shown impressive progress in natural language processing tasks, including machine translation, smaller models can still deliver competitive results. Leveraging this insight, we developed Mutarjim based on Kuwain-1.5B, a language model tailored for both Arabic and English. Despite its modest size, Mutarjim outperforms much larger models on several established benchmarks, a result achieved through an optimized two-phase training approach and a carefully curated, high-quality training corpus. Experimental results show that Mutarjim rivals models up to 20 times larger while significantly reducing computational costs and training requirements. We also introduce Tarjama-25, a new benchmark designed to overcome limitations in existing Arabic-English benchmarking datasets, such as domain narrowness, short sentence lengths, and English-source bias. Tarjama-25 comprises 5,000 expert-reviewed sentence pairs and spans a wide range of domains, offering a more comprehensive and balanced evaluation framework. Notably, Mutarjim achieves state-of-the-art performance on the English-to-Arabic task in Tarjama-25, surpassing even significantly larger and proprietary models such as GPT-4o mini. We publicly release Tarjama-25 to support future research and advance the evaluation of Arabic-English translation systems.
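To make the usage concrete, the sketch below shows how a compact decoder-only translation model like Mutarjim might be driven for both translation directions using Hugging Face transformers. The repository id and the instruction-style prompt template are illustrative assumptions, not details confirmed by the paper.

```python
# Minimal sketch of bidirectional Arabic-English translation with a
# compact decoder-only LM. The model id and prompt template below are
# placeholders/assumptions, not specified by the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/mutarjim-1.5b"  # hypothetical repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def translate(text: str, direction: str = "en-ar") -> str:
    # Assumed instruction-style prompt; the actual format may differ.
    src, tgt = ("English", "Arabic") if direction == "en-ar" else ("Arabic", "English")
    prompt = f"Translate the following {src} text to {tgt}:\n{text}\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    # Decode only the generated continuation, skipping the prompt tokens.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

print(translate("The weather is nice today.", direction="en-ar"))
```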

@article{hennara2025_2505.17894,
  title={Mutarjim: Advancing Bidirectional Arabic-English Translation with a Small Language Model},
  author={Khalil Hennara and Muhammad Hreden and Mohamed Motaism Hamed and Zeina Aldallal and Sara Chrouf and Safwan AlModhayan},
  journal={arXiv preprint arXiv:2505.17894},
  year={2025}
}