
Agreement Between Large Language Models and Human Raters in Essay Scoring: A Research Synthesis

Hongli Li
Che Han Chen
Kevin Fan
Chiho Young-Johnson
Soyoung Lim
Yali Feng
Main: 18 pages
1 figure
2 tables
Abstract

Despite the growing promise of large language models (LLMs) in automatic essay scoring (AES), empirical findings regarding their reliability compared to human raters remain mixed. Following the PRISMA 2020 guidelines, we synthesized 65 published and unpublished studies from January 2022 to August 2025 that examined agreement between LLMs and human raters in AES. Across studies, reported LLM-human agreement was generally moderate to good, with agreement indices (e.g., Quadratic Weighted Kappa, Pearson correlation, and Spearman's rho) mostly ranging between 0.30 and 0.80. Substantial variability in agreement levels was observed across studies, reflecting differences in study-specific factors as well as the lack of standardized reporting practices. Implications and directions for future research are discussed.
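The agreement indices named above can be made concrete. As an illustration only (not drawn from the synthesized studies), the following pure-Python sketch computes Quadratic Weighted Kappa (QWK) between two hypothetical sets of integer essay scores; the score data and function name are invented for the example.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Quadratic Weighted Kappa between two lists of integer scores.

    Penalizes disagreements by the squared distance between score
    categories, so a one-point gap counts far less than a large gap.
    """
    n_cats = max_score - min_score + 1
    n = len(rater_a)

    # Observed matrix: counts of (rater_a score, rater_b score) pairs.
    obs = [[0.0] * n_cats for _ in range(n_cats)]
    for a, b in zip(rater_a, rater_b):
        obs[a - min_score][b - min_score] += 1

    # Marginal score histograms for each rater (for chance agreement).
    hist_a = Counter(a - min_score for a in rater_a)
    hist_b = Counter(b - min_score for b in rater_b)

    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            w = (i - j) ** 2 / (n_cats - 1) ** 2  # quadratic weight
            expected = hist_a[i] * hist_b[j] / n   # chance-level count
            num += w * obs[i][j]
            den += w * expected
    return 1.0 - num / den

# Hypothetical scores on a 1-5 rubric (human rater vs. LLM rater).
human = [1, 2, 3, 4, 5, 3, 2, 4]
llm   = [1, 2, 4, 4, 5, 3, 1, 4]
print(round(quadratic_weighted_kappa(human, llm, 1, 5), 3))  # → 0.929
```

A QWK near 1.0 indicates near-perfect agreement, while values in the 0.30-0.80 range reported across the synthesized studies correspond to moderate-to-good agreement between LLM and human scores.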
