Can Large Language Models Learn Formal Logic? A Data-Driven Training and Evaluation Framework

This paper investigates the logical reasoning capabilities of large language models (LLMs). For a precisely defined yet tractable formulation, we choose the conceptually simple but technically complex task of constructing proofs in Boolean logic. A trained LLM receives as input a set of assumptions and a goal, and produces as output a proof that formally derives the goal from the assumptions. Incorrect proofs are caught by an automated proof checker. A critical obstacle for training is the scarcity of real-world proofs. We propose an efficient, randomized procedure for synthesizing valid proofs and introduce Template Transformation, a data augmentation technique that enhances the model's ability to handle complex logical expressions. The central evaluation question is whether an LLM has indeed learned to reason. We propose tests to measure the reasoning ability of a black-box LLM. By these measures, experiments demonstrate strong reasoning capabilities for assertions with short proofs, capabilities that decline as proof complexity grows. Notably, Template Transformation improves accuracy even for smaller models, suggesting its effectiveness across model scales.
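To make the training setup concrete, the following is a minimal, hypothetical Python sketch (not the authors' actual procedure) of the two ingredients the abstract describes: a randomized forward-chaining synthesizer that produces valid propositional proofs from random assumptions, and an independent checker that rejects any proof with an invalid step. All names, rule choices, and formula encodings here are illustrative assumptions.

# Hypothetical sketch: randomized synthesis of valid Boolean-logic proofs
# plus an independent proof checker. Not the paper's implementation.
import random

ATOMS = ["p", "q", "r", "s"]

def random_formula(depth=2):
    """Build a random Boolean formula as a nested tuple, e.g. ("and", "p", ("implies", "q", "r"))."""
    if depth == 0 or random.random() < 0.4:
        return random.choice(ATOMS)
    op = random.choice(["and", "or", "implies"])
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def synthesize_proof(num_assumptions=3, num_steps=5):
    """Return (proof_lines, goal): each derived line follows from earlier lines
    by and-introduction, and-elimination, or modus ponens, so the proof is valid
    by construction and the final formula serves as the goal."""
    lines = [("assume", random_formula()) for _ in range(num_assumptions)]
    derived = [f for _, f in lines]
    while len(lines) < num_assumptions + num_steps:
        rule = random.choice(["and_intro", "and_elim", "mp"])
        if rule == "and_intro":
            a, b = random.sample(range(len(derived)), 2)
            new = ("and", derived[a], derived[b])
            lines.append(("and_intro", a, b, new))
        elif rule == "and_elim":
            idxs = [i for i, f in enumerate(derived) if isinstance(f, tuple) and f[0] == "and"]
            if not idxs:
                continue
            i = random.choice(idxs)
            new = derived[i][random.choice([1, 2])]
            lines.append(("and_elim", i, new))
        else:  # modus ponens: from A and (A implies B), derive B
            pairs = [(i, j) for i, f in enumerate(derived)
                     for j, g in enumerate(derived)
                     if isinstance(g, tuple) and g[0] == "implies" and g[1] == f]
            if not pairs:
                continue
            i, j = random.choice(pairs)
            new = derived[j][2]
            lines.append(("mp", i, j, new))
        derived.append(new)
    return lines, derived[-1]

def check_proof(lines, goal):
    """Independently re-verify every inference step and that the last line is the goal."""
    derived = []
    for line in lines:
        rule, *rest = line
        if rule == "assume":
            derived.append(rest[0])
        elif rule == "and_intro":
            a, b, f = rest
            if f != ("and", derived[a], derived[b]):
                return False
            derived.append(f)
        elif rule == "and_elim":
            i, f = rest
            if not (isinstance(derived[i], tuple) and derived[i][0] == "and" and f in derived[i][1:]):
                return False
            derived.append(f)
        elif rule == "mp":
            i, j, f = rest
            if derived[j] != ("implies", derived[i], f):
                return False
            derived.append(f)
        else:
            return False
    return bool(derived) and derived[-1] == goal

if __name__ == "__main__":
    proof, goal = synthesize_proof()
    print("goal:", goal)
    print("checker accepts:", check_proof(proof, goal))

In a training pipeline along the lines the abstract suggests, (assumptions, goal) pairs would form the model input and the synthesized proof lines the target output, with the checker filtering model outputs at evaluation time.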
@article{xia2025_2504.20213,
  title={Can Large Language Models Learn Formal Logic? A Data-Driven Training and Evaluation Framework},
  author={Yuan Xia and Akanksha Atrey and Fadoua Khmaissia and Kedar S. Namjoshi},
  journal={arXiv preprint arXiv:2504.20213},
  year={2025}
}