Proving the Coding Interview: A Benchmark for Formally Verified Code Generation

8 February 2025
Quinn Dougherty
Ronak Mehta
Abstract

We introduce the Formally Verified Automated Programming Progress Standards, or FVAPPS, a benchmark of 4715 samples for writing programs and proving their correctness. It is the largest formal verification benchmark to date and includes 1083 curated, quality-controlled samples. Previously, APPS provided a benchmark and dataset of programming puzzles to be completed in Python and checked against unit tests, of the kind seen in technical assessments in the software engineering industry. Building on recent approaches to benchmarks for interactive theorem proving, we generalize the unit tests to Lean 4 theorems stated without proof (i.e., using Lean's "sorry" keyword). On the 406 theorems of 100 randomly selected samples, Sonnet correctly proves 30% and Gemini correctly proves 18%. We challenge the machine learning and program synthesis communities to solve each general-purpose programming problem together with its associated correctness specifications. The benchmark is available at this https URL.
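A FVAPPS-style sample pairs a program with correctness theorems whose proofs are left as `sorry`, to be filled in by the model. The following is a hypothetical illustration of that format, not an actual benchmark sample (the function name and property are invented):

```lean
-- Hypothetical sample in the benchmark's style: a Lean 4 definition
-- accompanied by a specification stated as an unproved theorem.
def maxOfList (xs : List Nat) : Nat :=
  xs.foldl Nat.max 0

-- The proof is elided with `sorry`; solving the sample means
-- replacing `sorry` with a complete proof that Lean accepts.
theorem maxOfList_ge_elem (xs : List Nat) (x : Nat) (h : x ∈ xs) :
    x ≤ maxOfList xs := by
  sorry
```

A solution is scored by whether Lean checks the completed proof with no remaining `sorry`.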

@article{dougherty2025_2502.05714,
  title={Proving the Coding Interview: A Benchmark for Formally Verified Code Generation},
  author={Quinn Dougherty and Ronak Mehta},
  journal={arXiv preprint arXiv:2502.05714},
  year={2025}
}