Peer review is vital in academia for evaluating research quality. Top AI conferences ask reviewers to report confidence scores to help ensure review reliability, but existing studies lack fine-grained analysis of the consistency between review text and these scores, potentially missing key details. This work assesses that consistency at the word, sentence, and aspect levels using review data from top deep learning and NLP conferences. We employ deep learning models to detect hedge sentences and aspects, then analyze report length, hedge word/sentence frequency, aspect mentions, and sentiment to evaluate text-score alignment. Correlation, significance, and regression tests examine the impact of confidence scores on paper outcomes. Results show high text-score consistency across all levels, with the regression analysis revealing that higher confidence scores correlate with paper rejection, supporting the validity of expert assessments and the fairness of peer review.
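As a rough illustration of the kind of consistency check described above (not the authors' actual pipeline), the sketch below computes a Spearman correlation between reviewer confidence scores and a crude hedge-word frequency, and fits a logistic regression of paper acceptance on confidence score. The column names, hedge-word list, and toy data are hypothetical stand-ins.

```python
# Minimal sketch, assuming a toy review dataset; not the paper's actual code.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

# Hypothetical hedge lexicon (word-level proxy for uncertainty in review text).
HEDGE_WORDS = {"might", "may", "possibly", "perhaps", "unclear", "seems"}

def hedge_frequency(text: str) -> float:
    """Fraction of tokens that are hedge words."""
    tokens = text.lower().split()
    return sum(t in HEDGE_WORDS for t in tokens) / len(tokens) if tokens else 0.0

# Hypothetical review data: confidence score, review text, and decision.
reviews = pd.DataFrame({
    "confidence": [2, 3, 4, 5, 3, 4],
    "review_text": [
        "The method might work but results seem unclear.",
        "Possibly novel, though evaluation may be weak.",
        "Solid experiments and clear contribution.",
        "Strong paper with convincing ablations.",
        "Perhaps interesting, but unclear baselines.",
        "Well written, results are convincing.",
    ],
    "accepted": [0, 0, 1, 1, 0, 1],
})

reviews["hedge_freq"] = reviews["review_text"].apply(hedge_frequency)

# Text-score consistency: do more confident reviewers hedge less?
rho, p_value = spearmanr(reviews["confidence"], reviews["hedge_freq"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Outcome analysis: how does confidence relate to acceptance?
model = LogisticRegression().fit(reviews[["confidence"]], reviews["accepted"])
print("confidence coefficient:", model.coef_[0][0])
```

A negative correlation between confidence and hedge frequency would indicate the kind of text-score consistency the study reports; the regression coefficient sketches how confidence relates to the accept/reject outcome.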
@article{wu2025_2505.15031,
  title   = {Are the confidence scores of reviewers consistent with the review content? Evidence from top conference proceedings in AI},
  author  = {Wenqing Wu and Haixu Xi and Chengzhi Zhang},
  journal = {arXiv preprint arXiv:2505.15031},
  year    = {2025}
}