From Guidelines to Practice: A New Paradigm for Arabic Language Model Evaluation

This paper addresses critical gaps in Arabic language model evaluation by establishing comprehensive theoretical guidelines and introducing a novel evaluation framework. We first analyze existing Arabic evaluation datasets, identifying significant issues in linguistic accuracy, cultural alignment, and methodological rigor. To address these limitations, we present the Arabic Depth Mini Dataset (ADMD), a carefully curated collection of 490 challenging questions spanning ten major domains (42 sub-domains; see Figure 1). Using ADMD, we evaluate five leading language models: GPT-4, Claude 3.5 Sonnet, Gemini Flash 1.5, CommandR 100B, and Qwen-Max. Our results reveal significant variation in model performance across domains, with particular challenges in areas requiring deep cultural understanding and specialized knowledge. Claude 3.5 Sonnet achieved the highest overall accuracy at 30%, showing relative strength in mathematical theory in Arabic, the Arabic language, and Islamic domains. This work provides both theoretical foundations and practical insights for improving Arabic language model evaluation, emphasizing the importance of cultural competence alongside technical capability.
@article{sibaee2025_2506.01920,
  title={From Guidelines to Practice: A New Paradigm for Arabic Language Model Evaluation},
  author={Serry Sibaee and Omer Nacar and Adel Ammar and Yasser Al-Habashi and Abdulrahman Al-Batati and Wadii Boulila},
  journal={arXiv preprint arXiv:2506.01920},
  year={2025}
}