PhysLab: A Benchmark Dataset for Multi-Granularity Visual Parsing of Physics Experiments

Visual parsing of images and videos is critical for a wide range of real-world applications. However, progress in this field is constrained by limitations of existing datasets: (1) insufficient annotation granularity, which impedes fine-grained scene understanding and high-level reasoning; (2) limited domain coverage, particularly a lack of datasets tailored for educational scenarios; and (3) a lack of explicit procedural guidance, with few logical rules and limited representation of structured task processes. To address these gaps, we introduce PhysLab, the first video dataset that captures students conducting complex physics experiments. The dataset includes four representative experiments that feature diverse scientific instruments and rich human-object interaction (HOI) patterns. PhysLab comprises 620 long-form videos and provides multilevel annotations that support a variety of vision tasks, including action recognition, object detection, and HOI analysis. We establish strong baselines and perform extensive evaluations to highlight key challenges in parsing procedural educational videos. We expect PhysLab to serve as a valuable resource for advancing fine-grained visual parsing, facilitating intelligent classroom systems, and fostering closer integration between computer vision and educational technologies. The dataset and the evaluation toolkit are publicly available at this https URL.
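The abstract does not specify the annotation format or toolkit API. As a purely illustrative sketch, assuming the multilevel labels were released as per-video JSON files with hypothetical fields such as "experiment", "actions", "objects", and "hoi", a minimal loader might look like the following (all field names are assumptions, not the dataset's documented schema):

import json
from pathlib import Path

def load_video_annotation(path):
    """Load one hypothetical per-video annotation file.

    The field names below (experiment, actions, objects, hoi) are
    illustrative assumptions, not PhysLab's documented schema.
    """
    with open(path, "r", encoding="utf-8") as f:
        ann = json.load(f)

    experiment = ann.get("experiment")       # video-level experiment label
    actions = ann.get("actions", [])         # temporal action segments
    objects = ann.get("objects", [])         # object bounding boxes
    interactions = ann.get("hoi", [])        # human-object interaction triplets
    return experiment, actions, objects, interactions

if __name__ == "__main__":
    # Hypothetical directory layout: one JSON annotation file per video.
    for ann_file in sorted(Path("annotations").glob("*.json")):
        exp, actions, objects, hoi = load_video_annotation(ann_file)
        print(f"{ann_file.name}: {exp}, {len(actions)} action segments, "
              f"{len(objects)} object boxes, {len(hoi)} HOI triplets")

Such a per-video layout is only one plausible design; the released evaluation toolkit at the URL above defines the actual interface.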
@article{zou2025_2506.06631,
  title   = {PhysLab: A Benchmark Dataset for Multi-Granularity Visual Parsing of Physics Experiments},
  author  = {Minghao Zou and Qingtian Zeng and Yongping Miao and Shangkun Liu and Zilong Wang and Hantao Liu and Wei Zhou},
  journal = {arXiv preprint arXiv:2506.06631},
  year    = {2025}
}