
PhysUniBench: An Undergraduate-Level Physics Reasoning Benchmark for Multimodal Models

Lintao Wang
Encheng Su
Jiaqi Liu
Pengze Li
Peng Xia
Jiabei Xiao
Wenlong Zhang
Xinnan Dai
Xi Chen
Yuan Meng
Mingyu Ding
Lei Bai
Wanli Ouyang
Shixiang Tang
Aoran Wang
Xinzhu Ma
Main: 9 pages, 3 figures, 4 tables; Bibliography: 3 pages; Appendix: 21 pages
Abstract

Physics problem-solving is a challenging domain for large AI models, requiring the integration of conceptual understanding, mathematical reasoning, and interpretation of physical diagrams. Current evaluation methodologies show notable limitations in capturing the breadth and complexity of undergraduate-level physics, underscoring the need for more rigorous assessments. To this end, we present PhysUniBench, a large-scale multimodal benchmark designed to evaluate and improve the reasoning capabilities of multimodal large language models (MLLMs) on undergraduate-level physics problems. PhysUniBench consists of 3,304 physics questions spanning 8 major sub-disciplines of physics, each accompanied by a visual diagram. The benchmark includes both open-ended and multiple-choice questions, systematically curated and difficulty-rated through an iterative model-in-the-loop process. Its construction involved a rigorous multi-stage pipeline, including multiple roll-outs, expert-level evaluation, automated filtering of easily solved problems, and a nuanced five-level difficulty grading system. Through extensive experiments, we observe that current state-of-the-art models encounter substantial challenges in physics reasoning. For example, GPT-4o mini achieves only about 34.2% accuracy on the proposed PhysUniBench. These results highlight that current MLLMs struggle with advanced physics reasoning, especially on multi-step problems and those requiring precise diagram interpretation. By providing a broad and rigorous assessment tool, PhysUniBench aims to drive progress in AI for Science, encouraging the development of models with stronger physical reasoning, problem-solving skills, and multimodal understanding. The benchmark and evaluation scripts are available at this https URL.
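To make the model-in-the-loop curation concrete, here is a minimal sketch (not the authors' code) of how such a difficulty rating could work: each question is attempted several times ("roll-outs"), questions the model solves almost every time are filtered out, and the rest are binned into five levels by solve rate. The `query_model` callable, the roll-out count, and the solve-rate cut-offs below are illustrative assumptions, not details from the paper.

```python
from typing import Callable

def rate_difficulty(
    question: str,
    diagram_path: str,
    reference_answer: str,
    query_model: Callable[[str, str], str],  # hypothetical MLLM API wrapper
    n_rollouts: int = 8,
) -> int | None:
    """Return a difficulty level 1-5, or None if the question is filtered out."""
    # Count how many of the n_rollouts attempts match the reference answer.
    correct = sum(
        query_model(question, diagram_path).strip() == reference_answer
        for _ in range(n_rollouts)
    )
    solve_rate = correct / n_rollouts
    if solve_rate > 0.9:  # solved almost every time -> too easy, drop it
        return None
    # Lower solve rates map to higher difficulty (level 5 = hardest).
    thresholds = [0.7, 0.5, 0.3, 0.1]  # assumed cut-offs, not from the paper
    for level, t in enumerate(thresholds, start=1):
        if solve_rate > t:
            return level
    return 5
```

In practice such a loop would be run iteratively, re-rating the surviving questions as stronger models become available, which matches the abstract's description of an iterative, multi-stage process.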
