SCoRE: Benchmarking Long-Chain Reasoning in Commonsense Scenarios

Long-chain reasoning remains a key challenge for large language models (LLMs), in part because natural text rarely contains explicit step-by-step reasoning data. Moreover, existing benchmarks for assessing this ability suffer from limitations such as narrow coverage, short reasoning paths, or high construction costs. We introduce SCoRE (Scenario-based Commonsense Reasoning Evaluation), a benchmark that synthesizes multi-hop questions from scenario schemas of entities, relations, and logical rules to assess long-chain commonsense reasoning. SCoRE contains 100k bilingual (Chinese-English) multiple-choice questions whose reasoning chains span 2-11 hops and are grouped into multiple difficulty levels. Each question is accompanied by fine-grained knowledge labels, an explicit reasoning chain, and a difficulty level for diagnostic evaluation. Evaluation results on cutting-edge LLMs such as o3-mini and DeepSeek R1 show that even the best model attains only 69.78% accuracy on SCoRE (and only 47.91% on the hard set), with errors often stemming from rare knowledge, logical inconsistency, and over-interpretation of simple questions. SCoRE offers a scalable, extensible framework for evaluating and diagnosing the long-chain commonsense reasoning abilities of LLMs and for guiding future advances in model design and training.
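The abstract only sketches the construction pipeline, so the toy example below illustrates, under stated assumptions, how multi-hop multiple-choice items with explicit reasoning chains could be synthesized from a small entity-relation schema. All facts, relation names, and functions here are hypothetical illustrations and are not taken from the paper or its released data.

```python
# Hypothetical sketch of schema-based multi-hop question synthesis, loosely in
# the spirit of the pipeline described in the abstract (entities + relations +
# rule-driven composition -> multiple-choice questions with reasoning chains).
# The schema and names below are illustrative assumptions, not SCoRE's actual code.
import random

# Scenario schema: base facts keyed by (relation, head entity) -> tail entity.
FACTS = {
    ("parent_of", "Alice"): "Bob",
    ("parent_of", "Bob"): "Carol",
    ("lives_in", "Carol"): "Paris",
}

def derive(start, chain):
    """Follow a chain of relations from `start`; each traversed fact is one hop.
    Returns the final entity and the explicit reasoning chain."""
    entity, hops = start, []
    for rel in chain:
        nxt = FACTS.get((rel, entity))
        if nxt is None:
            return None, hops
        hops.append((entity, rel, nxt))
        entity = nxt
    return entity, hops

def make_question(start, chain, distractor_pool):
    """Render one multiple-choice item with distractors and diagnostic labels."""
    answer, hops = derive(start, chain)
    if answer is None:
        return None
    options = random.sample([d for d in distractor_pool if d != answer], 3) + [answer]
    random.shuffle(options)
    return {
        "question": f"Starting from {start} and following "
                    f"{' then '.join(chain)}, what do we reach?",
        "options": options,
        "answer": answer,
        "reasoning_chain": hops,   # explicit chain, usable for error diagnosis
        "num_hops": len(hops),     # simple proxy for a difficulty level
    }

if __name__ == "__main__":
    item = make_question("Alice", ["parent_of", "parent_of", "lives_in"],
                         ["London", "Tokyo", "Berlin", "Paris"])
    print(item)
```

In this sketch the hop count of the sampled relation chain doubles as the difficulty label; the paper's actual difficulty grouping and knowledge labeling are presumably richer than this.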
@article{zhan2025_2503.06218,
  title   = {SCoRE: Benchmarking Long-Chain Reasoning in Commonsense Scenarios},
  author  = {Weidong Zhan and Yue Wang and Nan Hu and Liming Xiao and Jingyuan Ma and Yuhang Qin and Zheng Li and Yixin Yang and Sirui Deng and Jinkun Ding and Wenhan Ma and Rui Li and Weilin Luo and Qun Liu and Zhifang Sui},
  journal = {arXiv preprint arXiv:2503.06218},
  year    = {2025}
}