Dissecting Logical Reasoning in LLMs: A Fine-Grained Evaluation and Supervision Study

Abstract

Logical reasoning is a core capability for many applications of large language models (LLMs), yet existing benchmarks often rely solely on final-answer accuracy, failing to capture the quality and structure of the reasoning process. We propose FineLogic, a fine-grained evaluation framework that assesses logical reasoning across three dimensions: overall benchmark accuracy, stepwise soundness, and representation-level alignment. In addition, to better understand how reasoning capabilities emerge, we conduct a comprehensive study of the effects of supervision format during fine-tuning. We construct four supervision styles (one in natural language and three symbolic variants) and train LLMs under each. Our findings reveal that natural-language supervision yields strong generalization even on out-of-distribution and long-context tasks, while symbolic reasoning styles promote more structurally sound and atomic inference chains. Further, our representation-level probing shows that fine-tuning primarily improves reasoning behavior through step-by-step generation rather than by enhancing shortcut prediction or internalized correctness. Together, our framework and analysis provide a more rigorous and interpretable lens for evaluating and improving logical reasoning in LLMs.
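
To make the representation-level alignment dimension concrete, the sketch below trains a simple linear probe on model hidden states to test whether a correctness signal is linearly decodable from the representation. This is a minimal illustration under assumed placeholder names (hidden_states, labels are synthetic stand-ins), not the paper's exact probing protocol.

# Minimal sketch of a representation-level probe (illustrative only, not the
# paper's exact protocol). We assume `hidden_states` holds one hidden vector
# per example and `labels` marks whether the target statement is entailed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(500, 768))   # placeholder activations (e.g., final-token states)
labels = rng.integers(0, 2, size=500)         # placeholder entailment labels

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, test_size=0.2, random_state=0
)

# A simple linear probe: high held-out accuracy would indicate the correctness
# signal is linearly decodable from the model's internal representation.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")

With real model activations in place of the random placeholders, comparing probe accuracy before and after fine-tuning is one way to check whether supervision changes what the representations internalize, as opposed to only changing generated reasoning steps.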

@article{zhou2025_2506.04810,
  title={Dissecting Logical Reasoning in LLMs: A Fine-Grained Evaluation and Supervision Study},
  author={Yujun Zhou and Jiayi Ye and Zipeng Ling and Yufei Han and Yue Huang and Haomin Zhuang and Zhenwen Liang and Kehan Guo and Taicheng Guo and Xiangqi Wang and Xiangliang Zhang},
  journal={arXiv preprint arXiv:2506.04810},
  year={2025}
}