
Evaluating Large Language Models in Scientific Discovery

Zhangde Song
Jieyu Lu
Yuanqi Du
Botao Yu
Thomas M. Pruyn
Yue Huang
Kehan Guo
Xiuzhe Luo
Yuanhao Qu
Yi Qu
Yinkai Wang
Haorui Wang
Jeff Guo
Jingru Gan
Parshin Shojaee
Di Luo
Andres M Bran
Gen Li
Qiyuan Zhao
Shao-Xiong Lennon Luo
Yuxuan Zhang
Xiang Zou
Wanru Zhao
Yifan F. Zhang
Wucheng Zhang
Shunan Zheng
Saiyang Zhang
Sartaaj Takrim Khan
Mahyar Rajabi-Kochi
Samantha Paradi-Maropakis
Tony Baltoiu
Fengyu Xie
Tianyang Chen
Kexin Huang
Weiliang Luo
Meijing Fang
Xin Yang
Lixue Cheng
Jiajun He
Soha Hassoun
Xiangliang Zhang
Wei Wang
Chandan K. Reddy
Chao Zhang
Zhiling Zheng
Mengdi Wang
Le Cong
Carla P. Gomes
Chang-Yu Hsieh
Aditya Nandy
Philippe Schwaller
Heather J. Kulik
Haojun Jia
Huan Sun
Seyed Mohamad Moosavi
Chenru Duan
Main: 23 pages, Appendix: 30 pages, Bibliography: 1 page, 25 figures, 7 tables
Abstract

Large language models (LLMs) are increasingly applied to scientific research, yet prevailing science benchmarks probe decontextualized knowledge and overlook the iterative reasoning, hypothesis generation, and observation interpretation that drive scientific discovery. We introduce a scenario-grounded benchmark that evaluates LLMs across biology, chemistry, materials, and physics, in which domain experts define research projects of genuine interest and decompose them into modular research scenarios from which vetted questions are sampled. The framework assesses models at two levels: (i) question-level accuracy on scenario-tied items and (ii) project-level performance, where models must propose testable hypotheses, design simulations or experiments, and interpret results. Applying this two-phase scientific discovery evaluation (SDE) framework to state-of-the-art LLMs reveals a consistent performance gap relative to general science benchmarks, diminishing returns from scaling up model size and reasoning, and systematic weaknesses shared across top-tier models from different providers. Large performance variation across research scenarios means that the best-performing model changes from one evaluated discovery project to another, suggesting that all current LLMs remain far from general scientific "superintelligence". Nevertheless, LLMs already show promise across a wide variety of scientific discovery projects, including cases where constituent scenario scores are low, highlighting the role of guided exploration and serendipity in discovery. This SDE framework offers a reproducible benchmark for discovery-relevant evaluation of LLMs and charts practical paths to advance their development toward scientific discovery.
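To make the two-level structure concrete, the sketch below shows one way the project-to-scenario-to-question hierarchy could be scored: question-level accuracy computed per scenario, then reported alongside a separate project-level rating. This is an illustrative assumption, not the paper's released code; the class names, fields, and simple mean aggregation are placeholders and the actual SDE scoring may differ.

    # Minimal sketch of two-level SDE-style scoring (illustrative; not the paper's implementation).
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Scenario:
        name: str
        # 1.0 if the model answered a vetted, scenario-tied question correctly, else 0.0
        question_scores: list[float] = field(default_factory=list)

        def accuracy(self) -> float:
            """Question-level accuracy on items sampled from this scenario."""
            return mean(self.question_scores) if self.question_scores else 0.0

    @dataclass
    class Project:
        name: str
        scenarios: list[Scenario]
        # Hypothetical expert rating (0-1) of hypotheses, experiment/simulation
        # design, and result interpretation at the whole-project level.
        project_rating: float = 0.0

        def question_level(self) -> float:
            """Mean of per-scenario accuracies: the question-level view of the project."""
            return mean(s.accuracy() for s in self.scenarios)

        def report(self) -> dict:
            return {
                "project": self.name,
                "question_level": round(self.question_level(), 3),
                "project_level": self.project_rating,
            }

    if __name__ == "__main__":
        proj = Project(
            name="example-materials-project",
            scenarios=[
                Scenario("hypothesis_generation", [1, 0, 1, 1]),
                Scenario("simulation_design", [0, 1, 0]),
            ],
            project_rating=0.6,
        )
        print(proj.report())

Keeping the two levels separate, rather than collapsing them into a single score, mirrors the abstract's observation that project-level performance can be promising even when constituent scenario scores are low.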
