ELI-Why: Evaluating the Pedagogical Utility of Language Model Explanations

Language models today are widely used in education, yet their ability to tailor responses for learners with varied informational needs and knowledge backgrounds remains under-explored. To this end, we introduce ELI-Why, a benchmark of 13.4K "Why" questions for evaluating the pedagogical capabilities of language models. We then conduct two extensive human studies to assess the utility of language model-generated explanatory answers (explanations) on our benchmark, tailored to three distinct educational grades: elementary school, high school, and graduate school. In our first study, human raters assume the role of an "educator" to assess how well model explanations fit different educational grades. We find that GPT-4-generated explanations match their intended educational background only 50% of the time, compared to 79% for explanations curated by lay people. In our second study, human raters assume the role of a learner to assess whether an explanation fits their own informational needs. Across all educational backgrounds, users deemed GPT-4-generated explanations 20% less suited on average to their informational needs than explanations curated by lay people. Additionally, automated evaluation metrics reveal that explanations generated across different language model families for different informational needs remain indistinguishable in their grade level, limiting their pedagogical effectiveness.
@article{joshi2025_2506.14200,
  title={ELI-Why: Evaluating the Pedagogical Utility of Language Model Explanations},
  author={Brihi Joshi and Keyu He and Sahana Ramnath and Sadra Sabouri and Kaitlyn Zhou and Souti Chattopadhyay and Swabha Swayamdipta and Xiang Ren},
  journal={arXiv preprint arXiv:2506.14200},
  year={2025}
}