
Grading Scale Impact on LLM-as-a-Judge: Human-LLM Alignment Is Highest on 0-5 Grading Scale

Weiyue Li
Minda Zhao
Weixuan Dong
Jiahui Cai
Yuze Wei
Michael Pocress
Yi Li
Wanyan Yuan
Xiaoyue Wang
Ruoyu Hou
Kaiyuan Lou
Wenqi Zeng
Yutong Yang
Yilun Du
Mengyu Wang
Main: 7 pages
Figures: 3
Bibliography: 4 pages
Tables: 19
Appendix: 8 pages
Abstract

Large language models (LLMs) are increasingly used as automated evaluators, yet prior work demonstrates that these LLM judges often score inconsistently when the prompt is altered. The effect of the grading scale itself, however, remains underexplored. We study the LLM-as-a-judge problem by comparing two kinds of raters: humans and LLMs. We collect ratings from both groups on three grading scales and across six benchmarks spanning objective, open-ended subjective, and mixed tasks. Using intraclass correlation coefficients (ICC) to measure absolute agreement, we find that LLM judgments are not perfectly consistent across scales on subjective benchmarks, and that the choice of scale substantially shifts human-LLM agreement even when within-group panel reliability is high. Aggregated over tasks, the 0-5 grading scale yields the strongest human-LLM alignment. We further demonstrate that pooled reliability can mask benchmark heterogeneity, and we reveal systematic differences in alignment across gender subgroups, underscoring the importance of scale design and sub-level diagnostics as essential components of LLM-as-a-judge protocols.
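The abstract's central quantity, the intraclass correlation coefficient for absolute agreement, can be computed directly from a targets-by-raters score matrix. Below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater, per Shrout and Fleiss); the abstract does not specify which ICC variant or aggregation the paper uses, so that choice, along with the synthetic 0-5 scores in the usage example, is an illustrative assumption.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_targets, k_raters) matrix with one score per target/rater pair.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-target means
    col_means = ratings.mean(axis=0)  # per-rater means

    # Two-way ANOVA mean squares (no replication).
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum(
        (ratings - row_means[:, None] - col_means[None, :] + grand) ** 2
    )
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Absolute-agreement ICC: rater main effects count as disagreement.
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical usage: treat a human panel score and an LLM score as two
# "raters" over 50 items on a 0-5 scale, then compare ICC across scales.
rng = np.random.default_rng(0)
human = rng.integers(0, 6, size=(50, 1)).astype(float)
llm = np.clip(human + rng.integers(-1, 2, size=(50, 1)), 0, 5)
print(icc2_1(np.hstack([human, llm])))
```

Because the absolute-agreement form penalizes systematic offsets between raters (via the rater mean square), it is a stricter criterion than correlation-based consistency measures, which matters when asking whether an LLM reproduces human scores rather than merely their ranking.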
