
RankLLM: Weighted Ranking of LLMs by Quantifying Question Difficulty

Ziqian Zhang
Xingjian Hu
Yue Huang
Kai Zhang
Ruoxi Chen
Yixin Liu
Qingsong Wen
Kaidi Xu
Xiangliang Zhang
Neil Zhenqiang Gong
Lichao Sun
Main: 12 pages
14 figures
Bibliography: 2 pages
16 tables
Appendix: 19 pages
Abstract

Benchmarks establish a standardized evaluation framework to systematically assess the performance of large language models (LLMs), facilitating objective comparisons and driving advancements in the field. However, existing benchmarks fail to differentiate question difficulty, limiting their ability to distinguish models' capabilities effectively. To address this limitation, we propose RankLLM, a novel framework designed to quantify both question difficulty and model competency. RankLLM introduces difficulty as the primary criterion for differentiation, enabling a more fine-grained evaluation of LLM capabilities. Its core mechanism is bidirectional score propagation between models and questions: a model earns competency when it answers a question correctly, while a question's difficulty increases when models fail to answer it. Using this framework, we evaluate 30 models on 35,550 questions across multiple domains. RankLLM achieves 90% agreement with human judgments and consistently outperforms strong baselines such as IRT (Item Response Theory). It also exhibits strong stability, fast convergence, and high computational efficiency, making it a practical solution for large-scale, difficulty-aware LLM evaluation.
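
The abstract does not spell out RankLLM's exact update rules, so the following is only a minimal, hypothetical sketch of a bidirectional score-propagation loop of the general kind described: competency flows from questions to models on correct answers, and difficulty flows from models to questions on failures. The function name `bidirectional_scores`, the binary response matrix `R`, the uniform initialization, and the normalization scheme are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def bidirectional_scores(R, iters=100, tol=1e-8):
    """Jointly estimate model competency and question difficulty.

    R: binary matrix of shape (n_models, n_questions);
       R[i, j] = 1 if model i answered question j correctly, else 0.
    Returns (competency, difficulty), each normalized to sum to 1.
    """
    n_models, n_questions = R.shape
    competency = np.full(n_models, 1.0 / n_models)
    difficulty = np.full(n_questions, 1.0 / n_questions)

    for _ in range(iters):
        # A model earns competency for each question it answers correctly,
        # weighted by that question's current difficulty.
        new_comp = R @ difficulty
        # A question gains difficulty for each model it defeats,
        # weighted by that model's current competency.
        new_diff = (1 - R).T @ competency

        # Normalize so scores remain comparable across iterations
        # (small epsilon guards against all-zero score vectors).
        new_comp /= new_comp.sum() + 1e-12
        new_diff /= new_diff.sum() + 1e-12

        converged = (np.abs(new_comp - competency).max() < tol
                     and np.abs(new_diff - difficulty).max() < tol)
        competency, difficulty = new_comp, new_diff
        if converged:
            break

    return competency, difficulty
```

Under this kind of mutually reinforcing update (in the spirit of HITS-style iterative scoring), the fast convergence and stability reported in the abstract would correspond to the fixed point of the propagation loop; the actual weighting and convergence guarantees are detailed in the paper itself.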
