PhySense: Principle-Based Physics Reasoning Benchmarking for Large Language Models

30 May 2025
Yinggan Xu
Yue Liu
Zhiqiang Gao
Changnan Peng
Di Luo
Main: 9 pages, 5 figures · Bibliography: 4 pages · 4 tables · Appendix: 7 pages
Abstract

Large language models (LLMs) have rapidly advanced and are increasingly capable of tackling complex scientific problems, including those in physics. Despite this progress, current LLMs often fail to emulate the concise, principle-based reasoning characteristic of human experts, instead generating lengthy and opaque solutions. This discrepancy highlights a crucial gap in their ability to apply core physical principles for efficient and interpretable problem solving. To systematically investigate this limitation, we introduce PhySense, a novel principle-based physics reasoning benchmark designed to be easily solvable by experts using guiding principles, yet deceptively difficult for LLMs without principle-first reasoning. Our evaluation across multiple state-of-the-art LLMs and prompt types reveals a consistent failure to align with expert-like reasoning paths, providing insights for developing AI systems with efficient, robust and interpretable principle-based scientific reasoning.

@article{xu2025_2505.24823,
  title={PhySense: Principle-Based Physics Reasoning Benchmarking for Large Language Models},
  author={Yinggan Xu and Yue Liu and Zhiqiang Gao and Changnan Peng and Di Luo},
  journal={arXiv preprint arXiv:2505.24823},
  year={2025}
}