Towards Reliable Proof Generation with LLMs: A Neuro-Symbolic Approach

Abstract

Large language models (LLMs) struggle with formal domains that require rigorous logical deduction and symbolic reasoning, such as mathematical proof generation. We propose a neuro-symbolic approach that combines LLMs' generative strengths with structured components to overcome this challenge. As a proof of concept, we focus on geometry problems. Our approach is twofold: (1) we retrieve analogous problems and use their proofs to guide the LLM, and (2) a formal verifier evaluates the generated proofs and provides feedback, helping the model fix incorrect proofs. We demonstrate that our method significantly improves proof accuracy for OpenAI's o1 model (58%-70% improvement); both the analogous problems and the verifier's feedback contribute to these gains. More broadly, shifting to LLMs that generate provably correct conclusions could dramatically improve their reliability, accuracy, and consistency, unlocking complex tasks and critical real-world applications that require trustworthiness.
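The twofold approach described above can be sketched as a retrieve-generate-verify-repair loop. This is a minimal illustration only: the function names, the string-based "proof" representation, the keyword-overlap retrieval, and the stub verifier are all hypothetical placeholders for the paper's actual retriever, LLM calls, and formal geometry verifier.

```python
# Hypothetical sketch of a neuro-symbolic proof loop: retrieve analogous
# solved problems, draft a proof, then iterate with verifier feedback.
# All components here are toy stand-ins, not the authors' implementation.

def retrieve_analogous(problem, corpus, k=1):
    """Toy retrieval: rank solved problems by shared keywords with `problem`."""
    words = set(problem.split())
    scored = sorted(corpus,
                    key=lambda ex: -len(words & set(ex["problem"].split())))
    return scored[:k]

def generate_proof(problem, examples):
    """Stand-in for the LLM call guided by analogous proofs."""
    return examples[0]["proof"] if examples else ""

def verify(proof):
    """Stand-in for the formal verifier: returns (is_valid, feedback)."""
    if "QED" in proof:
        return True, ""
    return False, "proof is missing a concluding step"

def repair(proof, feedback):
    """Stand-in for the LLM revising its proof using verifier feedback."""
    if "concluding" in feedback:
        return proof + " QED"
    return proof

def prove(problem, corpus, max_rounds=3):
    """Generate a proof, then repair it until the verifier accepts or we give up."""
    examples = retrieve_analogous(problem, corpus)
    proof = generate_proof(problem, examples)
    for _ in range(max_rounds):
        ok, feedback = verify(proof)
        if ok:
            return proof
        proof = repair(proof, feedback)
    return None

corpus = [{"problem": "angles in isosceles triangle",
           "proof": "base angles equal"}]
print(prove("isosceles triangle angles", corpus))
```

The loop structure (generate, formally check, feed the verifier's error back to the model) is the key design choice: correctness is certified by the symbolic verifier rather than trusted from the LLM alone.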

@article{sultan2025_2505.14479,
  title={Towards Reliable Proof Generation with LLMs: A Neuro-Symbolic Approach},
  author={Oren Sultan and Eitan Stern and Dafna Shahaf},
  journal={arXiv preprint arXiv:2505.14479},
  year={2025}
}