Toward Automated Qualitative Analysis: Leveraging Large Language Models for Tutoring Dialogue Evaluation

3 April 2025
Megan Gu
Chloe Qianhui Zhao
Claire Liu
Nikhil Patel
Jahnvi Shah
Jionghao Lin
Kenneth R. Koedinger
Main: 2 pages, 1 figure, 1 table; bibliography: 2 pages
Abstract

Our study introduces an automated system that leverages large language models (LLMs) to assess the effectiveness of five key tutoring strategies: (1) giving effective praise, (2) reacting to errors, (3) determining what students know, (4) helping students manage inequity, and (5) responding to negative self-talk. Using a public dataset from the Teacher-Student Chatroom Corpus, our system classifies each tutoring strategy as either employed as desired or undesired. Our study uses GPT-3.5 with few-shot prompting to assess the use of these strategies and analyze tutoring dialogues. The results show that for the five tutoring strategies, True Negative Rates (TNR) range from 0.655 to 0.738 and Recall ranges from 0.327 to 0.432, indicating that the model is effective at excluding incorrect classifications but struggles to consistently identify the correct strategy. The strategy "helping students manage inequity" showed the highest performance, with a TNR of 0.738 and a Recall of 0.432. The study highlights the potential of LLMs in tutoring strategy analysis and outlines directions for future improvements, including incorporating more advanced models for more nuanced feedback.
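The abstract reports two metrics per strategy, TNR and Recall, for a binary "desired" vs. "undesired" classification. A minimal sketch of how those metrics are computed from a confusion matrix is below; the function name and example labels are illustrative, not taken from the paper or its dataset.

```python
def tnr_and_recall(y_true, y_pred, positive="desired"):
    """Compute True Negative Rate and Recall for a binary classification,
    treating `positive` (here, a strategy employed as desired) as the
    positive class. Labels are plain strings for illustration."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    # TNR = TN / (TN + FP): how often undesired uses are correctly excluded.
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    # Recall = TP / (TP + FN): how often desired uses are correctly identified.
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return tnr, recall
```

A high TNR with low Recall, the pattern reported above, means the classifier rarely mislabels an undesired use as desired, but misses many genuinely desired uses.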

@article{gu2025_2504.13882,
  title={Toward Automated Qualitative Analysis: Leveraging Large Language Models for Tutoring Dialogue Evaluation},
  author={Megan Gu and Chloe Qianhui Zhao and Claire Liu and Nikhil Patel and Jahnvi Shah and Jionghao Lin and Kenneth R. Koedinger},
  journal={arXiv preprint arXiv:2504.13882},
  year={2025}
}