A Position Paper on the Automatic Generation of Machine Learning Leaderboards

Abstract

An important task in machine learning (ML) research is comparing prior work, which is often performed via ML leaderboards: tabular overviews of experiments with comparable conditions (e.g., the same task, dataset, and metric). However, the growing volume of literature makes these leaderboards increasingly difficult to create and maintain. To ease this burden, researchers have developed methods to extract leaderboard entries from research papers for automated leaderboard curation. Yet, prior work varies in problem framing, complicating comparisons and limiting real-world applicability. In this position paper, we present the first overview of Automatic Leaderboard Generation (ALG) research, identifying fundamental differences in assumptions, scope, and output formats. We propose a unified conceptual framework to standardise how the ALG task is defined. We offer ALG benchmarking guidelines, including recommendations for datasets and metrics that promote fair, reproducible evaluation. Lastly, we outline challenges and new directions for ALG, such as advocating for broader coverage by including all reported results and richer metadata.

@article{timmer2025_2505.17465,
  title={A Position Paper on the Automatic Generation of Machine Learning Leaderboards},
  author={Roelien C. Timmer and Yufang Hou and Stephen Wan},
  journal={arXiv preprint arXiv:2505.17465},
  year={2025}
}