Evaluating Large Language Model with Knowledge Oriented Language Specific Simple Question Answering

We introduce KoLasSimpleQA, the first benchmark evaluating the multilingual factual ability of Large Language Models (LLMs). Inspired by existing research, we created the question set with features such as single knowledge point coverage, absolute objectivity, unique answers, and temporal stability. These questions enable efficient evaluation using the LLM-as-judge paradigm, testing both the LLMs' factual memory and self-awareness ("know what they don't know"). KoLasSimpleQA expands existing research in two key dimensions: (1) Breadth (multilingual coverage): it includes 9 languages, supporting evaluation of global applicability. (2) Depth (dual-domain design): it covers both the general domain (global facts) and the language-specific domain (such as history, culture, and regional traditions) for a comprehensive assessment of multilingual capabilities. We evaluated mainstream LLMs, including both traditional LLMs and emerging Large Reasoning Models (LRMs). Results show significant differences between the two domains in performance, ranking, calibration, and robustness, highlighting the need for targeted evaluation and optimization in multilingual contexts. We hope KoLasSimpleQA will help the research community better identify the capability boundaries of LLMs in multilingual contexts and provide guidance for model optimization. We will release KoLasSimpleQA at this https URL.
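To make the LLM-as-judge paradigm concrete, below is a minimal sketch of how such grading could work for single-answer factual QA. The prompt wording, the three-label scheme (CORRECT / INCORRECT / NOT_ATTEMPTED), and the `judge` callable are illustrative assumptions in the spirit of SimpleQA-style benchmarks, not the paper's actual implementation.

```python
# Hypothetical sketch of an LLM-as-judge grader for single-answer factual QA.
# The label set and metrics below are assumptions, not the paper's code.
from typing import Callable

JUDGE_PROMPT = """You are grading a factual QA response.
Question: {question}
Gold answer: {gold}
Model answer: {prediction}

Reply with exactly one label:
CORRECT - the model answer matches the gold answer.
INCORRECT - the model answer contradicts the gold answer.
NOT_ATTEMPTED - the model declined or gave no answer."""

LABELS = ("CORRECT", "INCORRECT", "NOT_ATTEMPTED")

def grade(judge: Callable[[str], str], question: str, gold: str, prediction: str) -> str:
    """Ask a judge model for a verdict and map it to one of three labels."""
    verdict = judge(JUDGE_PROMPT.format(question=question, gold=gold, prediction=prediction))
    verdict = verdict.strip().upper()
    for label in LABELS:
        if label in verdict:
            return label
    return "NOT_ATTEMPTED"  # conservative fallback on unparsable judge output

def summarize(labels: list[str]) -> dict[str, float]:
    """Separating overall accuracy from accuracy-given-attempted rewards both
    factual memory and self-awareness ('knowing what they don't know')."""
    n = len(labels)
    correct = labels.count("CORRECT")
    attempted = n - labels.count("NOT_ATTEMPTED")
    return {
        "accuracy": correct / n if n else 0.0,
        "accuracy_given_attempted": correct / attempted if attempted else 0.0,
        "attempt_rate": attempted / n if n else 0.0,
    }
```

A plain-string judge prompt with a closed label set keeps grading cheap and deterministic to parse, which is what makes questions with unique, objective answers amenable to automated evaluation at scale.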
@article{jiang2025_2505.16591,
  title={Evaluating Large Language Model with Knowledge Oriented Language Specific Simple Question Answering},
  author={Bowen Jiang and Runchuan Zhu and Jiang Wu and Zinco Jiang and Yifan He and Junyuan Gao and Jia Yu and Rui Min and Yinfan Wang and Haote Yang and Songyang Zhang and Dahua Lin and Lijun Wu and Conghui He},
  journal={arXiv preprint arXiv:2505.16591},
  year={2025}
}