ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual Knowledge Transfer

To achieve equitable performance across languages, multilingual large language models (LLMs) must be able to abstract knowledge beyond the language in which it was acquired. However, the current literature lacks reliable ways to measure LLMs' capability of cross-lingual knowledge transfer. To that end, we present ECLeKTic, a multilingual closed-book QA (CBQA) dataset that Evaluates Cross-Lingual Knowledge Transfer in a simple, black-box manner. We detected information with uneven coverage across languages by controlling for the presence and absence of Wikipedia articles in 12 languages. We generated knowledge-seeking questions in a source language, for which the answer appears in a relevant Wikipedia article, and translated them into the other 11 languages, whose respective Wikipedias lack equivalent articles. Assuming that Wikipedia reflects the prominent knowledge in the LLM's training data, to solve ECLeKTic's CBQA task the model is required to transfer knowledge between languages. Experimenting with 8 LLMs, we show that SOTA models struggle to effectively share knowledge across languages, even when they can predict the answer well for queries in the same language in which the knowledge was acquired.
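As a rough illustration of the coverage filter described in the abstract, the sketch below uses the public MediaWiki langlinks API to check whether an article exists in a source language's Wikipedia but has no counterpart in the other 11 languages. The language list, helper names, and filtering criterion here are illustrative assumptions, not the authors' actual pipeline.

```python
import requests

# Illustrative 12-language set; the paper's exact language codes may differ.
LANGS = ["en", "fr", "de", "he", "hi", "id", "it", "ja", "ko", "pt", "es", "zh"]

def langlink_codes(title: str, source_lang: str) -> set[str]:
    """Return the language codes of all interlanguage links from the
    given article, queried via the standard MediaWiki API."""
    resp = requests.get(
        f"https://{source_lang}.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "langlinks",
            "titles": title,
            "lllimit": "500",
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    page = next(iter(pages.values()))  # single title -> single page entry
    return {ll["lang"] for ll in page.get("langlinks", [])}

def covered_only_in_source(title: str, source_lang: str) -> bool:
    """True if the article exists in source_lang but in none of the other
    11 languages, approximating 'uneven coverage' across the set."""
    others = set(LANGS) - {source_lang}
    return not (langlink_codes(title, source_lang) & others)

# Example: does this Hebrew article lack counterparts in the other 11 Wikipedias?
# print(covered_only_in_source("Some article title", "he"))
```

A question sourced from an article passing this filter can then be asked both in the source language and in translation; under the paper's assumption that Wikipedia coverage proxies for training-data prominence, answering the translated query correctly requires transferring the knowledge across languages.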
@article{goldman2025_2502.21228,
  title={ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual Knowledge Transfer},
  author={Omer Goldman and Uri Shaham and Dan Malkin and Sivan Eiger and Avinatan Hassidim and Yossi Matias and Joshua Maynez and Adi Mayrav Gilady and Jason Riesa and Shruti Rijhwani and Laura Rimell and Idan Szpektor and Reut Tsarfaty and Matan Eyal},
  journal={arXiv preprint arXiv:2502.21228},
  year={2025}
}