Ranking-Based At-Risk Student Prediction Using Federated Learning and Differential Features

Abstract

Digital textbooks are widely used in various educational contexts, such as university courses and online lectures. Such textbooks yield learning log data that have been used in numerous educational data mining (EDM) studies for student behavior analysis and performance prediction. However, these studies have faced challenges in integrating confidential data, such as academic records and learning logs, across schools due to privacy concerns. Consequently, analyses are often conducted with data limited to a single school, which makes developing high-performing and generalizable models difficult. This study proposes a method that combines federated learning and differential features to address these issues. Federated learning enables model training without centralizing data, thereby preserving student privacy. Differential features, which utilize relative values instead of absolute values, enhance model performance and generalizability. To evaluate the proposed method, a model for predicting at-risk students was trained using data from 1,136 students across 12 courses conducted over 4 years, and validated on hold-out test data from 5 other courses. Experimental results demonstrated that the proposed method addresses privacy concerns while achieving performance comparable to that of models trained via centralized learning in terms of Top-n precision, nDCG, and PR-AUC. Furthermore, using differential features improved prediction performance across all evaluation datasets compared to non-differential approaches. The trained models were also applicable for early prediction, achieving high performance in detecting at-risk students in earlier stages of the semester within the validation datasets.
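The two core ideas named in the abstract can be sketched as follows. Note that the paper's exact feature definitions and aggregation protocol are not given here, so this is only a minimal illustration: `differential_features` assumes "relative values" means centering each feature against the cohort mean, and `fedavg` shows standard federated averaging (FedAvg-style weighting by client sample count), each school acting as one client.

```python
import numpy as np

def differential_features(X):
    """Turn absolute feature values into relative (differential) ones by
    subtracting the per-feature mean of the cohort. Hypothetical
    formulation; the paper's exact definition may differ."""
    return X - X.mean(axis=0, keepdims=True)

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine locally trained parameter vectors,
    weighting each client by its number of samples, so raw learning
    logs never leave the client (school)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Example: two schools each hold a locally trained parameter vector.
w_a = np.array([0.2, 0.8])  # school A, 100 students
w_b = np.array([0.6, 0.4])  # school B, 300 students
global_w = fedavg([w_a, w_b], [100, 300])  # → array([0.5, 0.5])
```

Only the parameter vectors (and sample counts) are exchanged; the per-student feature matrices stay local, which is the privacy property the abstract attributes to federated learning.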

@article{yoneda2025_2505.09287,
  title={Ranking-Based At-Risk Student Prediction Using Federated Learning and Differential Features},
  author={Shunsuke Yoneda and Valdemar Švábenský and Gen Li and Daisuke Deguchi and Atsushi Shimada},
  journal={arXiv preprint arXiv:2505.09287},
  year={2025}
}