Automated Identification of Logical Errors in Programs: Advancing Scalable Analysis of Student Misconceptions

16 May 2025
Muntasir Hoq
Ananya Rao
Reisha Jaishankar
Krish Piryani
Nithya Janapati
Jessica Vandenberg
Bradford Mott
Narges Norouzi
James Lester
Bita Akram
Abstract

In Computer Science (CS) education, understanding the factors contributing to students' programming difficulties is crucial for effective learning support. By identifying the specific issues students face, educators can provide targeted assistance to help them overcome obstacles and improve learning outcomes. While identifying sources of struggle, such as misconceptions, in real time can be challenging in current educational practice, analyzing logical errors in students' code can offer valuable insights. This paper presents a scalable framework for automatically detecting logical errors in students' programming solutions. Our framework is based on an explainable Abstract Syntax Tree (AST) embedding model, the Subtree-based Attention Neural Network (SANN), which identifies the structural components of programs containing logical errors. We conducted a series of experiments to evaluate its effectiveness, and the results suggest that our framework can accurately capture students' logical errors and, more importantly, provide deeper insights into their learning processes, offering a valuable tool for enhancing programming education.
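The abstract describes the pipeline only at a high level: parse a submission into an AST, embed its subtrees, and use attention weights to surface the structural components most likely to contain a logical error. The toy Python sketch below illustrates that general idea. Everything in it (the node-type vocabulary, the bag-of-node-type embedding, the hand-set query vector, and helper names such as flag_suspect_subtrees) is a hypothetical simplification for illustration, not the authors' SANN implementation.

# Illustrative sketch only: a toy, untrained stand-in for the paper's approach.
# It parses a student's Python solution, embeds each statement-level subtree as a
# bag of AST node types, and scores subtrees with a softmax "attention" weighting
# against a hand-set query vector. All names and weights are assumptions.
import ast
import math
from collections import Counter

# Hypothetical vocabulary of AST node types used for the toy embedding.
NODE_TYPES = ["For", "While", "If", "Compare", "BinOp", "Assign", "AugAssign", "Call", "Return"]

def embed_subtree(node: ast.AST) -> list[float]:
    """Bag-of-node-type embedding of one subtree (a crude stand-in for learned embeddings)."""
    counts = Counter(type(n).__name__ for n in ast.walk(node))
    return [float(counts.get(t, 0)) for t in NODE_TYPES]

def attention_scores(embeddings: list[list[float]], query: list[float]) -> list[float]:
    """Softmax over dot products: higher weight means the subtree is more 'suspicious'."""
    logits = [sum(e * q for e, q in zip(emb, query)) for emb in embeddings]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [x / total for x in exps]

def flag_suspect_subtrees(source: str, query: list[float], top_k: int = 2):
    """Parse a solution, embed each top-level statement subtree, and rank by attention weight."""
    tree = ast.parse(source)
    stmts = tree.body
    embeddings = [embed_subtree(s) for s in stmts]
    weights = attention_scores(embeddings, query)
    ranked = sorted(zip(weights, stmts), key=lambda pair: -pair[0])
    return [(w, ast.unparse(s).splitlines()[0]) for w, s in ranked[:top_k]]

if __name__ == "__main__":
    # A student solution with an off-by-one logical error in the loop bound.
    student_code = """
total = 0
for i in range(1, n):      # logical error: should be range(1, n + 1)
    total = total + i
print(total)
"""
    # Hypothetical 'error prototype' query emphasizing loop and comparison structure;
    # in the paper this role is played by learned attention parameters.
    query = [1.0, 1.0, 0.5, 0.5, 0.3, 0.1, 0.1, 0.2, 0.0]
    for weight, snippet in flag_suspect_subtrees(student_code, query):
        print(f"attention={weight:.2f}  subtree starts: {snippet}")

In the paper the subtree embeddings and attention parameters are learned from student submissions; the fixed query vector here only mimics that interface so the ranking step can be demonstrated end to end.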

@article{hoq2025_2505.10913,
  title={Automated Identification of Logical Errors in Programs: Advancing Scalable Analysis of Student Misconceptions},
  author={Muntasir Hoq and Ananya Rao and Reisha Jaishankar and Krish Piryani and Nithya Janapati and Jessica Vandenberg and Bradford Mott and Narges Norouzi and James Lester and Bita Akram},
  journal={arXiv preprint arXiv:2505.10913},
  year={2025}
}