
Training Language Models to Generate Quality Code with Program Analysis Feedback

Main: 8 Pages
3 Figures
5 Tables
Bibliography: 3 Pages
Appendix: 2 Pages
Abstract

Code generation with large language models (LLMs), often termed vibe coding, is increasingly adopted in production but fails to ensure code quality, particularly in security (e.g., SQL injection vulnerabilities) and maintainability (e.g., missing type annotations). Existing methods, such as supervised fine-tuning and rule-based post-processing, rely on labor-intensive annotations or brittle heuristics, limiting their scalability and effectiveness. We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code using program analysis-guided feedback. Specifically, REAL integrates two automated signals: (1) program analysis detecting security or maintainability defects and (2) unit tests ensuring functional correctness. Unlike prior work, our framework is prompt-agnostic and reference-free, enabling scalable supervision without manual intervention. Experiments across multiple datasets and model scales demonstrate that REAL outperforms state-of-the-art methods in simultaneous assessments of functionality and code quality. Our work bridges the gap between rapid prototyping and production-ready code, enabling LLMs to deliver both speed and quality.
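To make the reward design concrete, below is a minimal sketch of how the two automated signals described in the abstract could be combined into a single scalar reward for reinforcement learning. This is not the paper's implementation: the helper names (functionality_reward, quality_reward, combined_reward), the weight LAMBDA_QUALITY, and the use of pytest and flake8 as stand-ins for the unit-test harness and program analyzers are illustrative assumptions.

import os
import subprocess
import tempfile

# Weight on the quality signal; this value is an assumption, not taken from the paper.
LAMBDA_QUALITY = 0.5

def functionality_reward(code: str, test_code: str) -> float:
    """Return 1.0 if the generated code passes the accompanying unit tests, else 0.0."""
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "solution.py"), "w") as f:
            f.write(code)
        test_path = os.path.join(tmp, "test_solution.py")
        with open(test_path, "w") as f:
            f.write(test_code)
        try:
            result = subprocess.run(
                ["python", "-m", "pytest", test_path, "-q"],
                cwd=tmp, capture_output=True, timeout=60,
            )
        except subprocess.TimeoutExpired:
            return 0.0
        return 1.0 if result.returncode == 0 else 0.0

def quality_reward(code: str) -> float:
    """Map static-analysis findings to [0, 1]; fewer findings yields a higher reward."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python", "-m", "flake8", path],
            capture_output=True, text=True, timeout=60,
        )
        num_findings = len(result.stdout.splitlines())
        return max(0.0, 1.0 - 0.1 * num_findings)
    finally:
        os.unlink(path)

def combined_reward(code: str, test_code: str) -> float:
    """Scalar reward blending functional correctness and code quality for RL fine-tuning."""
    return functionality_reward(code, test_code) + LAMBDA_QUALITY * quality_reward(code)

In practice the quality term would come from security- and maintainability-focused analyzers (e.g., detectors for SQL injection or missing type annotations) rather than a general linter, and the mapping from analyzer findings to a bounded score is a design choice.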

@article{yao2025_2505.22704,
  title={Training Language Models to Generate Quality Code with Program Analysis Feedback},
  author={Feng Yao and Zilong Wang and Liyuan Liu and Junxia Cui and Li Zhong and Xiaohan Fu and Haohui Mai and Vish Krishnan and Jianfeng Gao and Jingbo Shang},
  journal={arXiv preprint arXiv:2505.22704},
  year={2025}
}