LLM4CD: Leveraging Large Language Models for Open-World Knowledge Augmented Cognitive Diagnosis

Cognitive diagnosis (CD) plays a crucial role in intelligent education, evaluating students' comprehension of knowledge concepts based on their test histories. However, current CD methods often model students, exercises, and knowledge concepts based solely on their ID relationships, neglecting the abundant semantic relationships within the educational data space. Furthermore, contemporary intelligent tutoring systems (ITS) frequently add new students and exercises, a situation that ID-based methods struggle to handle. The advent of large language models (LLMs) offers the potential to overcome this challenge with open-world knowledge. In this paper, we propose LLM4CD, which Leverages Large Language Models for Open-World Knowledge Augmented Cognitive Diagnosis. Our method utilizes the open-world knowledge of LLMs to construct cognitively expressive textual representations, which are then encoded to introduce rich semantic information into the CD task. Additionally, we propose an innovative bi-level encoder framework that models students' test histories through two levels of encoders: a macro-level cognitive text encoder and a micro-level knowledge state encoder. This approach replaces traditional ID embeddings with semantic representations, enabling the model to accommodate new students and exercises using open-world knowledge and to address the cold-start problem. Extensive experimental results demonstrate that our proposed method consistently outperforms previous CD models on multiple real-world datasets, validating the effectiveness of leveraging LLMs to introduce rich semantic information into the CD task.
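To make the bi-level design concrete, below is a minimal PyTorch sketch of the general idea, not the paper's actual LLM4CD architecture: module names, dimensions, and the simple mastery-minus-difficulty interaction are all illustrative assumptions. Student and exercise inputs are assumed to be fixed-size embeddings of LLM-generated text profiles (e.g., produced by a frozen sentence encoder), standing in for the ID embeddings of traditional CD models.

```python
import torch
import torch.nn as nn

class BiLevelCDModel(nn.Module):
    """Illustrative bi-level cognitive-diagnosis sketch (hypothetical names/shapes).

    Text embeddings of LLM-generated profiles replace learned ID embeddings,
    so any student or exercise with a text profile can be scored.
    """

    def __init__(self, text_dim: int, n_concepts: int, hidden: int = 128):
        super().__init__()
        # Macro level: encode the student's LLM-derived cognitive text profile.
        self.macro_encoder = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Micro level: map the macro code to a per-concept knowledge state.
        self.micro_encoder = nn.Linear(hidden, n_concepts)
        # Exercise side: per-concept difficulty from the exercise's text embedding.
        self.exercise_encoder = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_concepts)
        )
        # Predict the probability of a correct response from the interaction.
        self.predictor = nn.Sequential(
            nn.Linear(n_concepts, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, student_text, exercise_text, q_mask):
        # q_mask: (batch, n_concepts) binary indicator of the concepts tested.
        state = torch.sigmoid(self.micro_encoder(self.macro_encoder(student_text)))
        diff = torch.sigmoid(self.exercise_encoder(exercise_text))
        # Interaction: mastery minus difficulty, restricted to tested concepts.
        interaction = (state - diff) * q_mask
        return torch.sigmoid(self.predictor(interaction)).squeeze(-1)

# Toy usage with random tensors standing in for LLM profile embeddings.
model = BiLevelCDModel(text_dim=384, n_concepts=10)
students = torch.randn(4, 384)   # 4 student profile embeddings
exercises = torch.randn(4, 384)  # 4 exercise text embeddings
q = torch.zeros(4, 10)
q[:, :3] = 1.0                   # each exercise tests the first 3 concepts
print(model(students, exercises, q).shape)  # torch.Size([4])
```

Because both inputs are text embeddings rather than per-ID lookup tables, a brand-new student or exercise only needs a text profile to be scored, which illustrates the cold-start property the abstract claims.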
@article{zhang2025_2505.13492,
  title   = {LLM4CD: Leveraging Large Language Models for Open-World Knowledge Augmented Cognitive Diagnosis},
  author  = {Weiming Zhang and Lingyue Fu and Qingyao Li and Kounianhua Du and Jianghao Lin and Jingwei Yu and Wei Xia and Weinan Zhang and Ruiming Tang and Yong Yu},
  journal = {arXiv preprint arXiv:2505.13492},
  year    = {2025}
}