The ICML 2023 Ranking Experiment: Examining Author Self-Assessment in ML/AI Peer Review

24 August 2024
Buxin Su
Jiayao Zhang
Natalie Collina
Yuling Yan
Didong Li
Kyunghyun Cho
Jianqing Fan
Aaron Roth
Weijie J. Su
Abstract

We conducted an experiment during the review process of the 2023 International Conference on Machine Learning (ICML), asking authors with multiple submissions to rank their papers based on perceived quality. In total, we received 1,342 rankings, each from a different author, covering 2,592 submissions. In this paper, we present an empirical analysis of how author-provided rankings could be leveraged to improve peer review processes at machine learning conferences. We focus on the Isotonic Mechanism, which calibrates raw review scores using the author-provided rankings. Our analysis shows that these ranking-calibrated scores outperform the raw review scores in estimating the ground truth "expected review scores" in terms of both squared and absolute error metrics. Furthermore, we propose several cautious, low-risk applications of the Isotonic Mechanism and author-provided rankings in peer review, including supporting senior area chairs in overseeing area chairs' recommendations, assisting in the selection of paper awards, and guiding the recruitment of emergency reviewers.
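At its core, the Isotonic Mechanism projects an author's raw review scores onto the set of score vectors consistent with the author's ranking, in the least-squares sense. A minimal sketch of that projection step is the classic pool-adjacent-violators algorithm (PAVA), shown below; this is an illustrative toy, not the paper's full mechanism, and the function name and input convention (scores listed from the author's top-ranked paper downward) are assumptions for the example.

```python
def isotonic_calibrate(raw_scores):
    """Least-squares projection of raw review scores onto non-increasing
    sequences, via pool-adjacent-violators (PAVA).

    `raw_scores` are ordered by the author's ranking, best paper first,
    so the calibrated scores must be non-increasing in that order.
    """
    # Each block stores [sum of scores, count]; its value is the mean.
    blocks = []
    for s in raw_scores:
        blocks.append([float(s), 1])
        # Merge adjacent blocks while a later block's mean exceeds an
        # earlier one's (a violation of the non-increasing constraint).
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]
        ):
            tail_sum, tail_count = blocks.pop()
            blocks[-1][0] += tail_sum
            blocks[-1][1] += tail_count
    # Expand each block back into per-paper calibrated scores.
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out
```

For example, if an author ranks three papers best-to-worst but the raw scores are `[3.0, 5.0, 4.0]`, the ranking contradicts the scores, and the projection pools them to `[4.0, 4.0, 4.0]`; scores already consistent with the ranking, such as `[5.0, 4.0, 3.0]`, pass through unchanged.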

@article{su2025_2408.13430,
  title={The ICML 2023 Ranking Experiment: Examining Author Self-Assessment in ML/AI Peer Review},
  author={Buxin Su and Jiayao Zhang and Natalie Collina and Yuling Yan and Didong Li and Kyunghyun Cho and Jianqing Fan and Aaron Roth and Weijie Su},
  journal={arXiv preprint arXiv:2408.13430},
  year={2025}
}