GitGoodBench: A Novel Benchmark For Evaluating Agentic Performance On Git

Main: 5 pages · Bibliography: 2 pages · Appendix: 10 pages · 11 figures · 11 tables
Abstract

Benchmarks for Software Engineering (SE) AI agents, most notably SWE-bench, have catalyzed progress in the programming capabilities of AI agents. However, they overlook critical developer workflows such as Version Control System (VCS) operations. To close this gap, we present GitGoodBench, a novel benchmark for evaluating AI agent performance on VCS tasks. GitGoodBench covers three core Git scenarios extracted from permissive open-source Python, Java, and Kotlin repositories. Our benchmark provides three datasets: a comprehensive evaluation suite (900 samples), a rapid prototyping version (120 samples), and a training corpus (17,469 samples). We establish baseline performance on the prototyping version of our benchmark using GPT-4o equipped with custom tools, achieving an overall solve rate of 21.11%. We expect GitGoodBench to serve as a crucial stepping stone toward truly comprehensive SE agents that go beyond mere programming.
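To make the evaluation setup concrete, the sketch below shows what a harness of this kind could look like: check out a repository at a task commit, let an agent operate on it, and score the result against an expected repository state. This is a minimal illustration, not the paper's actual harness; the sample schema (repo_url, base_commit, scenario, expected_tree), the agent.solve interface, and the tree-hash scoring rule are all illustrative assumptions.

import subprocess
import tempfile

def run_git(args, cwd):
    # Run a git command in the working copy and return its stdout.
    result = subprocess.run(["git", *args], cwd=cwd,
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def evaluate_sample(sample, agent):
    # Clone the sample's repository at the task commit, let the agent act,
    # then score by comparing the resulting tree hash to the expected one.
    # All sample fields and the agent API here are hypothetical.
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(["git", "clone", sample["repo_url"], workdir], check=True)
        run_git(["checkout", sample["base_commit"]], cwd=workdir)
        agent.solve(scenario=sample["scenario"], repo_path=workdir)
        return run_git(["rev-parse", "HEAD^{tree}"], cwd=workdir) == sample["expected_tree"]

def solve_rate(samples, agent):
    # Fraction of samples solved, reported as a percentage
    # (the paper's 21.11% figure is a metric of this general shape).
    solved = sum(evaluate_sample(s, agent) for s in samples)
    return 100.0 * solved / len(samples)

Comparing tree hashes is one plausible automatic check for scenarios whose goal is a specific final repository state; scenarios judged on commit history or message quality would need a different scoring rule.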

@article{lindenbauer2025_2505.22583,
  title={GitGoodBench: A Novel Benchmark For Evaluating Agentic Performance On Git},
  author={Tobias Lindenbauer and Egor Bogomolov and Yaroslav Zharov},
  journal={arXiv preprint arXiv:2505.22583},
  year={2025}
}