OIBench: Benchmarking Strong Reasoning Models with Olympiad in Informatics

As models become increasingly sophisticated, conventional algorithm benchmarks are rapidly becoming saturated, underscoring the need for more challenging benchmarks to guide future improvements in algorithmic reasoning. This paper introduces OIBench, a high-quality, private, and challenging olympiad-level informatics dataset comprising 250 carefully curated original problems. We detail the benchmark's construction methodology, which ensures comprehensive coverage of programming paradigms and complexities, and we demonstrate its resistance to contamination through experiments. We propose Time/Space Completion Curves for finer-grained efficiency analysis and enable direct human-model comparisons through evaluations against high-level human participants. Our experiments reveal that while open-source models lag behind closed-source counterparts, current SOTA models already outperform most human participants in both correctness and efficiency, though they remain suboptimal relative to the canonical solutions. By releasing OIBench as a fully open-source resource (this https URL), we hope this benchmark will help advance the code reasoning capabilities of future LLMs.
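To make the Time Completion Curve idea concrete, below is a minimal sketch, written for this summary rather than taken from the paper: for each time budget, it computes the fraction of test cases a solution finishes within that budget, so a model's curve can be compared against the canonical solution's. All function and variable names are illustrative assumptions, not the authors' implementation.

import numpy as np
import matplotlib.pyplot as plt

def completion_curve(runtimes, budgets):
    """Fraction of test cases finished within each time budget.

    runtimes: per-test-case wall-clock times of one solution (seconds);
              use np.inf for test cases that timed out or crashed.
    budgets:  increasing array of time budgets at which to evaluate the curve.
    """
    runtimes = np.asarray(runtimes, dtype=float)
    return np.array([(runtimes <= b).mean() for b in budgets])

if __name__ == "__main__":
    # Hypothetical per-test-case runtimes (seconds) for two solutions.
    model_times = [0.12, 0.30, 0.45, 1.10, np.inf]   # last case timed out
    canonical_times = [0.05, 0.08, 0.20, 0.35, 0.40]

    budgets = np.linspace(0.0, 2.0, 201)
    plt.plot(budgets, completion_curve(model_times, budgets), label="model")
    plt.plot(budgets, completion_curve(canonical_times, budgets), label="canonical")
    plt.xlabel("time budget (s)")
    plt.ylabel("fraction of test cases completed")
    plt.legend()
    plt.savefig("time_completion_curve.png")

A space completion curve would follow the same pattern with peak memory in place of runtime; the area between a model's curve and the canonical solution's curve gives a single-number efficiency gap if one is needed.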
@article{zhu2025_2506.10481,
  title   = {OIBench: Benchmarking Strong Reasoning Models with Olympiad in Informatics},
  author  = {Yaoming Zhu and Junxin Wang and Yiyang Li and Lin Qiu and ZongYu Wang and Jun Xu and Xuezhi Cao and Yuhuai Wei and Mingshi Wang and Xunliang Cai and Rong Ma},
  journal = {arXiv preprint arXiv:2506.10481},
  year    = {2025}
}