Training LLM-based Tutors to Improve Student Learning Outcomes in Dialogues

9 March 2025
Alexander Scarlatos
Naiming Liu
Jaewook Lee
Richard Baraniuk
Andrew S. Lan
Abstract

Generative artificial intelligence (AI) has the potential to scale up personalized tutoring through large language models (LLMs). Recent AI tutors are adapted for the tutoring task by training or prompting LLMs to follow effective pedagogical principles, though they are not trained to maximize student learning throughout the course of a dialogue. Therefore, they may engage with students in a suboptimal way. We address this limitation by introducing an approach to train LLMs to generate tutor utterances that maximize the likelihood of student correctness, while still encouraging the model to follow good pedagogical practice. Specifically, we generate a set of candidate tutor utterances and score them using (1) an LLM-based student model to predict the chance of correct student responses and (2) a pedagogical rubric evaluated by GPT-4o. We then use the resulting data to train an open-source LLM, Llama 3.1 8B, using direct preference optimization. We show that tutor utterances generated by our model lead to significantly higher chances of correct student responses while maintaining the pedagogical quality of GPT-4o. We also conduct qualitative analyses and a human evaluation to demonstrate that our model generates high quality tutor utterances.
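To make the training recipe concrete, below is a minimal sketch (not the authors' released code) of how candidate tutor utterances could be scored and turned into preference pairs for direct preference optimization. The functions `sample_candidates`, `predict_student_correctness`, and `score_pedagogy`, as well as the weighting parameter `alpha`, are hypothetical stand-ins for the tutor LLM, the LLM-based student model, and the GPT-4o rubric described in the abstract.

```python
# Sketch: build (chosen, rejected) preference pairs for DPO from scored
# candidate tutor utterances. All scoring functions are assumed interfaces.

from typing import Callable, Dict, List


def build_dpo_pairs(
    dialogue_contexts: List[str],
    sample_candidates: Callable[[str, int], List[str]],       # tutor LLM sampling
    predict_student_correctness: Callable[[str, str], float],  # student model -> P(correct)
    score_pedagogy: Callable[[str, str], float],               # rubric score in [0, 1]
    n_candidates: int = 8,
    alpha: float = 0.5,  # assumed weight between correctness and pedagogy
) -> List[Dict[str, str]]:
    """For each dialogue context, sample candidate tutor utterances, score them
    with the student model and the pedagogical rubric, and keep the best and
    worst candidates as a (chosen, rejected) pair."""
    pairs = []
    for context in dialogue_contexts:
        candidates = sample_candidates(context, n_candidates)
        scored = []
        for utterance in candidates:
            correctness = predict_student_correctness(context, utterance)
            pedagogy = score_pedagogy(context, utterance)
            scored.append((alpha * correctness + (1 - alpha) * pedagogy, utterance))
        scored.sort(reverse=True)
        pairs.append({
            "prompt": context,
            "chosen": scored[0][1],    # highest combined score
            "rejected": scored[-1][1],  # lowest combined score
        })
    return pairs
```

Pairs in this prompt/chosen/rejected format can then be passed to a standard DPO trainer (for example, the `DPOTrainer` in Hugging Face's `trl` library) to fine-tune an open-source model such as Llama 3.1 8B, as the paper describes.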

View on arXiv: https://arxiv.org/abs/2503.06424
@article{scarlatos2025_2503.06424,
  title={Training LLM-based Tutors to Improve Student Learning Outcomes in Dialogues},
  author={Alexander Scarlatos and Naiming Liu and Jaewook Lee and Richard Baraniuk and Andrew Lan},
  journal={arXiv preprint arXiv:2503.06424},
  year={2025}
}