Artificial Intelligence Bias on English Language Learners in Automatic Scoring

Abstract

This study investigated potential scoring bias and disparities toward English Language Learners (ELLs) when automatic scoring systems are applied to middle school students' written responses to science assessments. We specifically examined how training data that is unbalanced with respect to ELLs contributes to scoring bias and disparities. We fine-tuned BERT on four datasets: responses from (1) ELLs, (2) non-ELLs, (3) a mixed dataset reflecting the real-world proportion of ELLs and non-ELLs (unbalanced), and (4) a balanced mixed dataset with equal representation of both groups. The study analyzed 21 assessment items: 10 items with about 30,000 ELL responses, five items with about 1,000 ELL responses, and six items with about 200 ELL responses. Scoring accuracy (Acc) was calculated and compared across conditions using Friedman tests to identify bias. We measured the Mean Score Gaps (MSGs) between ELLs and non-ELLs and then calculated the differences between the MSGs produced by human raters and those produced by the AI models to identify scoring disparities. We found no AI bias or distorted disparities between ELLs and non-ELLs when the training dataset was sufficiently large (ELL = 30,000 and ELL = 1,000), but concerns may arise when the sample size is limited (ELL = 200).
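The two statistical comparisons described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the accuracy and score values below are invented placeholders, and the function name `mean_score_gap` is a hypothetical helper, but the structure (a Friedman test across the four training conditions, and an MSG difference between human and AI scoring) follows the abstract.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical per-item scoring accuracies under the four fine-tuning
# conditions described in the abstract (values are illustrative only).
acc_ell_only   = np.array([0.81, 0.78, 0.84, 0.79, 0.82])
acc_non_ell    = np.array([0.80, 0.77, 0.85, 0.80, 0.81])
acc_unbalanced = np.array([0.82, 0.79, 0.83, 0.78, 0.83])
acc_balanced   = np.array([0.81, 0.78, 0.84, 0.80, 0.82])

# Friedman test: do the four training conditions differ systematically
# in scoring accuracy across the same set of items?
stat, p = friedmanchisquare(acc_ell_only, acc_non_ell,
                            acc_unbalanced, acc_balanced)

def mean_score_gap(scores_non_ell, scores_ell):
    """Mean Score Gap (MSG): mean non-ELL score minus mean ELL score."""
    return float(np.mean(scores_non_ell) - np.mean(scores_ell))

# Computed once for human-assigned scores and once for AI-predicted
# scores; the values here are made up for illustration.
human_msg = mean_score_gap([2.1, 1.8, 2.4], [1.9, 1.7, 2.2])
ai_msg    = mean_score_gap([2.0, 1.9, 2.3], [1.8, 1.6, 2.1])

# A nonzero difference would indicate the AI model distorts the
# human-observed gap between the two groups.
msg_difference = ai_msg - human_msg
```

In this sketch, a small `msg_difference` would suggest the AI model preserves the human-rated gap between groups, which is the disparity criterion the study reports.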

@article{guo2025_2505.10643,
  title={Artificial Intelligence Bias on English Language Learners in Automatic Scoring},
  author={Shuchen Guo and Yun Wang and Jichao Yu and Xuansheng Wu and Bilgehan Ayik and Field M. Watts and Ehsan Latif and Ninghao Liu and Lei Liu and Xiaoming Zhai},
  journal={arXiv preprint arXiv:2505.10643},
  year={2025}
}