Enhancing the Learning Experience: Using Vision-Language Models to Generate Questions for Educational Videos

Main: 12 pages
Bibliography: 5 pages
10 tables
Abstract

Web-based educational videos offer flexible learning opportunities and are becoming increasingly popular. However, improving user engagement and knowledge retention remains a challenge. Automatically generated questions can activate learners and support their knowledge acquisition; they can also help teachers and learners assess understanding. While large language and vision-language models have been employed for various tasks, their application to question generation for educational videos remains underexplored. In this paper, we investigate the capabilities of current vision-language models for generating learning-oriented questions from educational video content. We assess (1) the performance of out-of-the-box models; (2) the effects of fine-tuning on content-specific question generation; (3) the impact of different video modalities on question quality; and (4) in a qualitative study, the relevance, answerability, and difficulty of the generated questions. Our findings delineate the capabilities of current vision-language models, highlighting the need for fine-tuning and for addressing challenges in question diversity and relevance. We identify requirements for future multimodal datasets and outline promising research directions.
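To make the out-of-the-box setting concrete, the sketch below shows one way to prompt an off-the-shelf vision-language model with a single sampled video frame and ask it for learning-oriented questions. The model checkpoint (a LLaVA-style model on Hugging Face), the prompt wording, and the midpoint frame-sampling strategy are all illustrative assumptions; the paper's actual models, prompts, and evaluation pipeline are not reproduced here.

# Minimal sketch: zero-shot question generation from one video frame with an
# off-the-shelf vision-language model. Checkpoint, prompt, and frame sampling
# are assumptions for illustration, not the paper's setup.
import cv2
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint; any image+text VLM works

def sample_frame(video_path: str, position: float = 0.5) -> Image.Image:
    """Grab one frame at a relative position in the video (default: midpoint)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(total * position))
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read a frame from {video_path}")
    # OpenCV returns BGR; convert to RGB for the PIL image the processor expects.
    return Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

def generate_questions(video_path: str, n_questions: int = 3) -> str:
    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = LlavaForConditionalGeneration.from_pretrained(MODEL_ID)
    frame = sample_frame(video_path)
    # LLaVA-1.5 chat format: one <image> placeholder per image in the prompt.
    prompt = (
        "USER: <image>\n"
        f"This frame is taken from an educational video. Write {n_questions} "
        "learning-oriented questions a teacher could ask about the content shown. "
        "ASSISTANT:"
    )
    inputs = processor(images=frame, text=prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=256)
    return processor.batch_decode(output, skip_special_tokens=True)[0]

if __name__ == "__main__":
    print(generate_questions("lecture.mp4"))  # hypothetical input file

A single frame ignores the audio and transcript modalities whose impact the paper studies; extending the sketch to transcripts would mean appending the transcript text to the prompt, which is one of the modality combinations such an evaluation could compare.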

@article{stamatakis2025_2505.01790,
  title={Enhancing the Learning Experience: Using Vision-Language Models to Generate Questions for Educational Videos},
  author={Markos Stamatakis and Joshua Berger and Christian Wartena and Ralph Ewerth and Anett Hoppe},
  journal={arXiv preprint arXiv:2505.01790},
  year={2025}
}