CLEVER: A Curated Benchmark for Formally Verified Code Generation

We introduce CLEVER, a high-quality, curated benchmark of 161 problems for end-to-end verified code generation in Lean. Each problem consists of (1) the task of generating a specification that matches a held-out ground-truth specification, and (2) the task of generating a Lean implementation that provably satisfies this specification. Unlike prior benchmarks, CLEVER avoids test-case supervision, LLM-generated annotations, and specifications that leak implementation logic or allow vacuous solutions. All outputs are verified post-hoc using Lean's type checker to ensure machine-checkable correctness. We use CLEVER to evaluate several few-shot and agentic approaches based on state-of-the-art language models. These methods all struggle to achieve full verification, establishing CLEVER as a challenging frontier benchmark for program synthesis and formal reasoning. Our benchmark can be found on GitHub (this https URL) as well as HuggingFace (this https URL). All our evaluation code is also available online (this https URL).
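To make the two-task structure concrete, the following is a minimal, hypothetical Lean sketch, not an actual CLEVER problem: the names maxSpec, myMax, and myMax_satisfies_spec are illustrative, and the proof assumes a recent Lean 4 toolchain with the omega tactic available. It shows a ground-truth-style specification, a candidate implementation, and a proof that Lean's type checker can certify post-hoc.

-- Hypothetical illustration (not drawn from CLEVER): a specification,
-- a candidate implementation, and a machine-checked correctness proof.

-- Ground-truth-style specification: `r` is the maximum of `a` and `b`.
def maxSpec (a b r : Nat) : Prop :=
  a ≤ r ∧ b ≤ r ∧ (r = a ∨ r = b)

-- Candidate implementation.
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- Post-hoc verification: the implementation provably satisfies the spec,
-- checked by Lean's type checker rather than by test cases.
theorem myMax_satisfies_spec (a b : Nat) : maxSpec a b (myMax a b) := by
  unfold maxSpec myMax
  split <;> omega

In CLEVER itself, the specification is held out: a solver must first produce a specification equivalent to the ground truth, and only then an implementation together with a proof of this kind.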
@article{thakur2025_2505.13938,
  title={CLEVER: A Curated Benchmark for Formally Verified Code Generation},
  author={Amitayush Thakur and Jasper Lee and George Tsoukalas and Meghana Sistla and Matthew Zhao and Stefan Zetzsche and Greg Durrett and Yisong Yue and Swarat Chaudhuri},
  journal={arXiv preprint arXiv:2505.13938},
  year={2025}
}