
CulturALL: Benchmarking Multilingual and Multicultural Competence of LLMs on Grounded Tasks

Peiqin Lin
Chenyang Lyu
Wenjiang Luo
Haotian Ye
Md Mehrab Hossain
Chunlan Ma
Shaoxiong Ji
Younes Samih
Bo Zeng
Fan Jiang
Yuanbin Cao
Dilda Duisenbek
Adrian Neo Sau Xun
Daria Pozdniakova
Liubou Misevich
Nevena Marinković
Ngoc Gia Linh Nguyen
Thi Khanh Linh Do
Sarakmatak Sophy
Baotian Hu
Guanhua Chen
Gongbo Tang
Alham Fikri Aji
Longyue Wang
Weihua Luo
Main: 8 pages · Bibliography: 3 pages · Appendix: 5 pages · 15 figures · 6 tables
Abstract

Large language models (LLMs) are now deployed worldwide, inspiring a surge of benchmarks that measure their multilingual and multicultural abilities. However, these benchmarks prioritize generic language understanding or superficial cultural trivia, leaving the evaluation of grounded tasks -- where models must reason within real-world, context-rich scenarios -- largely unaddressed. To fill this gap, we present CulturALL, a comprehensive and challenging benchmark for assessing LLMs' multilingual and multicultural competence on grounded tasks. CulturALL is built through a human--AI collaborative framework: expert annotators ensure appropriate difficulty and factual accuracy, while LLMs reduce the manual workload. By drawing on diverse sources, the benchmark achieves comprehensive scenario coverage, and each item is deliberately constructed to be difficult. CulturALL comprises 2,610 samples in 14 languages from 51 regions, distributed across 16 topics to capture the full breadth of grounded tasks. Experiments show that the best-performing LLM achieves only 44.48% accuracy on CulturALL, underscoring substantial room for improvement.
