A Culturally-Rich Romanian NLP Dataset from "Who Wants to Be a Millionaire?" Videos

Main: 7 pages · 5 figures · 5 tables · Bibliography: 2 pages · Appendix: 1 page
Abstract

Large Language Models (LLMs) demonstrate varying performance across languages and cultural contexts. This study introduces a novel, culturally-rich, multilingual dataset derived from video recordings of the Romanian game show "Who Wants to Be a Millionaire?" (Vrei să fii Milionar?). We employed an innovative process combining optical character recognition (OCR), automated text extraction, and manual verification to collect question-answer pairs, enriching them with metadata including question domain (e.g., biology, history), cultural relevance (Romanian-specific vs. international), and difficulty. Benchmarking state-of-the-art LLMs, including Romanian-adapted models, on this dataset revealed significant performance disparities: models consistently achieve higher accuracy (80-95%) on international questions than on Romanian-specific cultural questions (50-75%). We further investigate these differences through experiments involving machine translation of Romanian questions into English and cross-lingual tests using a comparable dataset in French. Our findings underscore the impact of cultural context and data source on LLM performance and offer practical insights for building robust, culturally-aware multilingual NLP systems, especially in educational domains. The dataset is publicly available on Hugging Face.
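
To make the collection step described above more concrete, below is a minimal Python sketch of what frame sampling plus OCR for question-answer extraction could look like. The library choices (OpenCV for frame sampling, Tesseract via pytesseract with the Romanian language pack), the assumed on-screen overlay region, and the record schema are our own illustrative assumptions, not the authors' actual pipeline; the manual-verification and metadata-labeling steps from the paper are only indicated by the placeholder fields.

```python
# Hypothetical sketch of the frame-sampling + OCR extraction step.
# Assumes OpenCV, pytesseract, and Tesseract's Romanian ("ron") data are installed.
import cv2
import pytesseract
from dataclasses import dataclass, asdict, field


@dataclass
class QARecord:
    question: str
    answers: list = field(default_factory=list)  # four on-screen options (A-D)
    domain: str = ""       # e.g. "biology", "history" (filled in during manual verification)
    cultural: str = ""     # "romanian" vs. "international" (manual label)
    difficulty: int = 0    # e.g. position on the money ladder (manual label)


def ocr_question_frames(video_path: str, every_n_seconds: float = 2.0):
    """Sample frames and OCR the lower portion of the screen where the
    question and answer options are typically overlaid."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(fps * every_n_seconds))
    records, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            h, _ = frame.shape[:2]
            overlay = frame[int(0.65 * h):, :]           # assumed overlay region
            gray = cv2.cvtColor(overlay, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray, lang="ron")
            lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
            if len(lines) >= 5:                          # question + 4 options
                records.append(QARecord(question=lines[0], answers=lines[1:5]))
        idx += 1
    cap.release()
    return [asdict(r) for r in records]
```

In practice the raw OCR output would still contain duplicates (the same question stays on screen across many sampled frames) and recognition noise, which is why the abstract pairs the automated extraction with deduplication and manual verification before the metadata labels are assigned.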

@article{ganea2025_2506.05991,
  title={A Culturally-Rich Romanian NLP Dataset from "Who Wants to Be a Millionaire?" Videos},
  author={Alexandru-Gabriel Ganea and Antonia-Adelina Popovici and Adrian-Marius Dumitran},
  journal={arXiv preprint arXiv:2506.05991},
  year={2025}
}