ResearchTrend.AI
Are the confidence scores of reviewers consistent with the review content? Evidence from top conference proceedings in AI

21 May 2025
Wenqing Wu
Haixu Xi
Chengzhi Zhang
Abstract

Peer review is vital in academia for evaluating research quality. Top AI conferences use reviewer confidence scores to ensure review reliability, but existing studies lack fine-grained analysis of the consistency between review text and confidence scores, potentially missing key details. This work assesses that consistency at the word, sentence, and aspect levels using review data from top NLP conferences. We employ deep learning models to detect hedge sentences and aspects, then analyze report length, hedge word and sentence frequency, aspect mentions, and sentiment to evaluate text-score alignment. Correlation, significance, and regression tests examine the impact of confidence scores on paper outcomes. Results show high text-score consistency across all levels, and regression reveals that higher confidence scores correlate with paper rejection, validating expert assessments and the fairness of peer review.

@article{wu2025_2505.15031,
  title={Are the confidence scores of reviewers consistent with the review content? Evidence from top conference proceedings in AI},
  author={Wenqing Wu and Haixu Xi and Chengzhi Zhang},
  journal={arXiv preprint arXiv:2505.15031},
  year={2025}
}