
LLM-based Automated Theorem Proving Hinges on Scalable Synthetic Data Generation

Abstract

Recent advances in large language models (LLMs) have sparked considerable interest in automated theorem proving, and a prominent line of research integrates stepwise LLM-based provers into tree search. In this paper, we introduce a novel proof-state exploration approach for training data synthesis, designed to produce diverse tactics across a wide range of intermediate proof states, thereby facilitating effective one-shot fine-tuning of an LLM as the policy model. We also propose an adaptive beam size strategy, which effectively takes advantage of our data synthesis method and strikes a balance between exploration and exploitation during tree search. Evaluations on the MiniF2F and ProofNet benchmarks demonstrate that our method outperforms strong baselines under the stringent Pass@1 metric, attaining an average pass rate of 60.74% on MiniF2F and 21.18% on ProofNet. These results underscore the impact of large-scale synthetic data in advancing automated theorem proving.
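
The adaptive beam size strategy is described only at a high level above. As a rough illustration, the following Python sketch shows one way a depth-dependent beam could mediate between exploration and exploitation in proof search. The ProofState type, the propose and apply_tactic hooks, and the shrink-with-depth schedule are all hypothetical placeholders for this sketch, not the authors' actual design.

import heapq
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

# Hypothetical stand-in for an intermediate proof state.
@dataclass
class ProofState:
    goal: str
    solved: bool = False

def adaptive_beam_size(base: int, depth: int, min_size: int = 1) -> int:
    """One plausible adaptation rule (an assumption, not the paper's):
    explore widely near the root, then narrow the beam with depth."""
    return max(min_size, base // (depth + 1))

def beam_search(
    root: ProofState,
    propose: Callable[[ProofState], List[Tuple[float, str]]],   # policy: (score, tactic) pairs
    apply_tactic: Callable[[ProofState, str], Optional[ProofState]],
    base_beam: int = 8,
    max_depth: int = 64,
) -> Optional[ProofState]:
    """Beam search over proof states with a depth-adaptive beam width."""
    frontier = [root]
    for depth in range(max_depth):
        width = adaptive_beam_size(base_beam, depth)
        candidates: List[Tuple[float, int, ProofState]] = []
        tiebreak = 0  # unique counter so tuples never compare ProofState objects
        for state in frontier:
            for score, tactic in propose(state):
                child = apply_tactic(state, tactic)
                if child is None:      # tactic failed to apply
                    continue
                if child.solved:       # goal closed: return the finished proof state
                    return child
                candidates.append((score, tiebreak, child))
                tiebreak += 1
        if not candidates:             # every branch is stuck
            return None
        # Keep only the top-`width` states by policy score for the next level.
        frontier = [s for _, _, s in heapq.nlargest(width, candidates)]
    return None

Narrowing the beam with depth is one plausible reading of the exploration/exploitation trade-off: shallow states get a wide sampling of candidate tactics, while deeper states commit to the highest-scoring branches.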

@article{lai2025_2505.12031,
  title={LLM-based Automated Theorem Proving Hinges on Scalable Synthetic Data Generation},
  author={Junyu Lai and Jiakun Zhang and Shuo Xu and Taolue Chen and Zihang Wang and Yao Yang and Jiarui Zhang and Chun Cao and Jingwei Xu},
  journal={arXiv preprint arXiv:2505.12031},
  year={2025}
}