Improving LLM-Generated Code Quality with GRPO

6 pages (main), 5-page appendix, 4-page bibliography; 5 figures, 2 tables
Abstract
Large Language Models (LLMs) are gaining widespread use for code generation. Recent training procedures use execution feedback as a reward signal, typically focusing on functional correctness as measured by unit-test pass rate. However, this reward fails to capture the maintainability, quality, and safety of the generated code. We address this under-explored area by developing a comprehensive library that quantifies various aspects of code quality, and we use it as a reward in GRPO. We find that GRPO increases code quality according to this measure, a result confirmed by blinded expert human annotators.
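GRPO (Group Relative Policy Optimization) samples a group of completions per prompt and normalizes each sample's reward against the group's statistics, so no learned value function is needed. The sketch below is a minimal illustration of that group-relative advantage under a scalar code-quality reward; it is not the authors' library, and `quality_score` is a hypothetical placeholder for their quality metric.

```python
# Minimal sketch (assumed, not the paper's implementation): computing
# GRPO's group-relative advantages from a scalar code-quality reward.
import statistics


def quality_score(code: str) -> float:
    # Hypothetical stand-in for the paper's quality library; a real
    # scorer might aggregate lint findings, complexity, and safety checks.
    return 1.0 if "\n" in code.strip() else 0.5  # crude readability proxy


def grpo_advantages(completions: list[str]) -> list[float]:
    # Normalize each completion's reward by the group mean and std,
    # yielding the per-sample advantage used in the policy update.
    rewards = [quality_score(c) for c in completions]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mean) / std for r in rewards]


# Completions scoring above the group average get positive advantages
# and are reinforced; below-average ones are discouraged.
group = ["def add(a, b):\n    return a + b\n", "def add(a,b): return a+b"]
print(grpo_advantages(group))  # -> [1.0, -1.0]
```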
@article{robeyns2025_2506.02211,
  title={Improving LLM-Generated Code Quality with GRPO},
  author={Maxime Robeyns and Laurence Aitchison},
  journal={arXiv preprint arXiv:2506.02211},
  year={2025}
}