
Hierarchical Attention Generates Better Proofs

Main: 7 pages · Appendix: 3 pages · Bibliography: 5 pages · 3 figures · 15 tables
Abstract

Large language models (LLMs) have shown promise in formal theorem proving, but their token-level processing often fails to capture the inherent hierarchical nature of mathematical proofs. We introduce \textbf{Hierarchical Attention}, a regularization method that aligns LLMs' attention mechanisms with mathematical reasoning structures. Our approach establishes a five-level hierarchy from foundational elements to high-level concepts, ensuring structured information flow in proof generation. Experiments demonstrate that our method improves proof success rates by 2.05\% on miniF2F and 1.69\% on ProofNet, while reducing proof complexity by 23.81\% and 16.50\%, respectively. The code is available at this https URL.
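To make the idea concrete, here is a minimal sketch of what an attention regularizer of this kind could look like. It is not the authors' implementation: it assumes each token is assigned one of five hierarchy levels and that the penalty discourages attention edges that cross the hierarchy in a disallowed direction (here, tokens attending "upward" to strictly more abstract levels); the actual constraint, level assignment, and weighting in the paper may differ. The function `hierarchy_violation_loss` and the variable names are hypothetical.

```python
import torch


def hierarchy_violation_loss(attn: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a hierarchy-aware attention penalty.

    attn:   (batch, heads, seq, seq) attention weights; row i attends to column j.
    levels: (batch, seq) integer hierarchy level per token, e.g. 0 = foundational
            element up to 4 = high-level concept (five levels, as in the paper).
    Returns a scalar penalizing attention mass on edges where a token attends to
    a token at a strictly higher level than its own (assumed direction).
    """
    # violation[b, i, j] = 1 where token i sits at a lower level than token j
    violation = (levels.unsqueeze(2) < levels.unsqueeze(1)).float()  # (batch, seq, seq)
    violation = violation.unsqueeze(1)  # broadcast across attention heads
    # Average violating attention mass over batch, heads, and query positions
    return (attn * violation).sum(dim=-1).mean()


# Hypothetical usage: add the penalty to the standard language-modeling loss.
# total_loss = lm_loss + reg_weight * hierarchy_violation_loss(attn_weights, token_levels)
```

In this sketch the regularizer is differentiable in the attention weights, so it can be added to the training objective with a scalar weight; the choice of which level pairs count as violations is the assumption most likely to differ from the paper's actual formulation.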

@article{chen2025_2504.19188,
  title={Hierarchical Attention Generates Better Proofs},
  author={Jianlong Chen and Chao Li and Yang Yuan and Andrew C Yao},
  journal={arXiv preprint arXiv:2504.19188},
  year={2025}
}