
ResearchCodeBench: Benchmarking LLMs on Implementing Novel Machine Learning Research Code

Main: 9 Pages
7 Figures
Bibliography: 3 Pages
2 Tables
Appendix: 9 Pages
Abstract

Large language models (LLMs) have shown promise in transforming machine learning research, yet their ability to faithfully implement novel ideas from recent research papers, ideas unseen during pretraining, remains unclear. We introduce ResearchCodeBench, a benchmark of 212 coding challenges that evaluates LLMs' ability to translate cutting-edge ML contributions from top 2024-2025 research papers into executable code. We assessed 30+ proprietary and open-source LLMs and found that even the best models correctly implement less than 40% of the code. Gemini-2.5-Pro-Preview performs best with a 37.3% success rate, followed by O3 (High) and O4-mini (High) at 32.3% and 30.8%, respectively. We present empirical findings on performance comparisons, contamination, and error patterns. By providing a rigorous and community-driven evaluation platform, ResearchCodeBench enables continuous understanding and advancement of LLM-driven innovation in research code generation.
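
The abstract reports aggregate success rates over the 212 coding challenges. As a minimal sketch of how such a rate could be aggregated, the Python snippet below assumes a simple pass/fail outcome per challenge with optional weighting by snippet size; the names, data structure, and weighting scheme are illustrative assumptions, not the paper's actual evaluation harness.

```python
# Hypothetical sketch: aggregating a benchmark success rate from per-challenge
# pass/fail results. Names and weighting are illustrative assumptions, not the
# paper's actual evaluation pipeline.
from dataclasses import dataclass


@dataclass
class ChallengeResult:
    challenge_id: str
    passed: bool          # did the generated code pass the challenge's checks?
    lines_of_code: int    # size of the reference snippet (used for weighting)


def success_rate(results: list[ChallengeResult], weight_by_lines: bool = True) -> float:
    """Return the fraction of the benchmark solved, optionally weighted by snippet size."""
    if not results:
        return 0.0
    if weight_by_lines:
        total = sum(r.lines_of_code for r in results)
        solved = sum(r.lines_of_code for r in results if r.passed)
        return solved / total
    return sum(r.passed for r in results) / len(results)


if __name__ == "__main__":
    demo = [
        ChallengeResult("paper-A/snippet-1", passed=True, lines_of_code=12),
        ChallengeResult("paper-A/snippet-2", passed=False, lines_of_code=40),
        ChallengeResult("paper-B/snippet-1", passed=True, lines_of_code=8),
    ]
    print(f"line-weighted success rate: {success_rate(demo):.1%}")
    print(f"unweighted success rate:    {success_rate(demo, weight_by_lines=False):.1%}")
```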

View on arXiv
@article{hua2025_2506.02314,
  title={ResearchCodeBench: Benchmarking LLMs on Implementing Novel Machine Learning Research Code},
  author={Tianyu Hua and Harper Hua and Violet Xiang and Benjamin Klieger and Sang T. Truong and Weixin Liang and Fan-Yun Sun and Nick Haber},
  journal={arXiv preprint arXiv:2506.02314},
  year={2025}
}