RDF-Based Structured Quality Assessment Representation of Multilingual LLM Evaluations

Large Language Models (LLMs) increasingly serve as knowledge interfaces, yet systematically assessing their reliability when faced with conflicting information remains difficult. We propose an RDF-based framework for assessing multilingual LLM quality, with a focus on knowledge conflicts. Our approach captures model responses across four distinct context conditions (complete, incomplete, conflicting, and no-context information) in German and English. This structured representation enables comprehensive analysis of knowledge leakage (where models favor training data over provided context), error detection, and multilingual consistency. We demonstrate the framework in a fire safety domain experiment, revealing critical patterns in context prioritization and language-specific performance, and showing that our vocabulary was sufficient to express every assessment facet encountered in the 28-question study.
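To make the idea of a structured assessment record concrete, the following is a minimal sketch of how one evaluation (one question, one language, one context condition) might be expressed as RDF triples using the rdflib library. The namespace, class, and property names (e.g. QualityAssessment, contextCondition, knowledgeLeakage) are illustrative assumptions, not the paper's actual vocabulary.

```python
# Hypothetical sketch: modeling a single LLM assessment record in RDF with rdflib.
# All vocabulary terms below are assumed for illustration, not taken from the paper.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

EVAL = Namespace("http://example.org/llm-eval#")  # hypothetical namespace

g = Graph()
g.bind("eval", EVAL)

# One assessment: question 7, German run, conflicting-context condition.
assessment = URIRef("http://example.org/llm-eval/assessment/q07-de-conflicting")
g.add((assessment, RDF.type, EVAL.QualityAssessment))
g.add((assessment, EVAL.question, Literal("q07")))
g.add((assessment, EVAL.language, Literal("de")))
g.add((assessment, EVAL.contextCondition, EVAL.Conflicting))  # one of four conditions
g.add((assessment, EVAL.responseCorrect, Literal(False, datatype=XSD.boolean)))
g.add((assessment, EVAL.knowledgeLeakage, Literal(True, datatype=XSD.boolean)))

print(g.serialize(format="turtle"))
```

Because each response is a typed RDF resource, cross-condition and cross-language comparisons (e.g. filtering all German runs where knowledgeLeakage is true) reduce to straightforward SPARQL queries over the graph.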
@article{gwozdz2025_2504.21605,
  title   = {RDF-Based Structured Quality Assessment Representation of Multilingual LLM Evaluations},
  author  = {Jonas Gwozdz and Andreas Both},
  journal = {arXiv preprint arXiv:2504.21605},
  year    = {2025}
}