
CodeSimpleQA: Scaling Factuality in Code Large Language Models

Jian Yang
Wei Zhang
Yizhi Li
Shawn Guo
Haowen Wang
Aishan Liu
Ge Zhang
Zili Wang
Zhoujun Li
Xianglong Liu
Weifeng Lv
Main: 7 pages · Appendix: 3 pages · Bibliography: 3 pages · 11 figures · 4 tables
Abstract

Large language models (LLMs) have made significant strides in code generation, synthesizing code snippets from natural language instructions with impressive capability. However, a critical challenge remains: ensuring that LLMs generate factually accurate responses about programming concepts, technical implementations, and other aspects of software knowledge. Most previous code-related benchmarks focus on the execution correctness of generated code, overlooking the factual accuracy of programming knowledge. To address this gap, we present CodeSimpleQA, a comprehensive bilingual benchmark designed to evaluate the factual accuracy of code LLMs in answering code-related questions. The benchmark contains carefully curated question-answer pairs in both English and Chinese, covering diverse programming languages and major computer science domains. We further create CodeSimpleQA-Instruct, a large-scale instruction corpus with 66M samples, and develop a post-training framework that combines supervised fine-tuning and reinforcement learning. Our comprehensive evaluation of diverse LLMs reveals that even frontier models struggle with code factuality. The proposed framework yields substantial improvements over the base model, underscoring the importance of factuality-aware alignment in developing reliable code LLMs.
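To make the evaluation setup concrete, below is a minimal sketch of a SimpleQA-style factuality grading loop, in which a judge model labels each candidate answer CORRECT, INCORRECT, or NOT_ATTEMPTED against a curated reference. This is an illustration only, not the authors' released harness: the `QAPair` schema, the judge prompt, and the `answer_fn`/`judge_fn` callables are all hypothetical placeholders.

```python
# Minimal sketch of a SimpleQA-style factuality evaluation loop.
# Dataset schema, judge prompt, and callables are assumptions for
# illustration, not the CodeSimpleQA authors' released code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class QAPair:
    question: str   # code-related factual question (English or Chinese)
    reference: str  # gold short answer curated by annotators

JUDGE_PROMPT = (
    "Compare the candidate answer to the reference answer.\n"
    "Reply with exactly one word: CORRECT, INCORRECT, or NOT_ATTEMPTED.\n"
    "Question: {q}\nReference: {ref}\nCandidate: {cand}\n"
)

def evaluate(pairs: list[QAPair],
             answer_fn: Callable[[str], str],
             judge_fn: Callable[[str], str]) -> dict[str, float]:
    """Score a model's factual accuracy on code QA pairs.

    answer_fn: model under test, maps a question to a free-form answer.
    judge_fn: grader LLM, maps a judge prompt to one of the three labels.
    """
    counts = {"CORRECT": 0, "INCORRECT": 0, "NOT_ATTEMPTED": 0}
    for pair in pairs:
        candidate = answer_fn(pair.question)
        verdict = judge_fn(JUDGE_PROMPT.format(
            q=pair.question, ref=pair.reference, cand=candidate)).strip().upper()
        # Treat any malformed judge output conservatively as INCORRECT.
        counts[verdict if verdict in counts else "INCORRECT"] += 1
    n = max(len(pairs), 1)
    attempted = counts["CORRECT"] + counts["INCORRECT"]
    return {
        "accuracy": counts["CORRECT"] / n,
        # Accuracy among attempted answers, as in SimpleQA-style reporting.
        "accuracy_given_attempted": counts["CORRECT"] / max(attempted, 1),
    }

if __name__ == "__main__":
    # Toy run with stubbed model and judge, just to exercise the loop.
    pairs = [QAPair("Which Python keyword defines a generator?", "yield")]
    print(evaluate(pairs,
                   answer_fn=lambda q: "yield",
                   judge_fn=lambda p: "CORRECT"))
```

Reporting accuracy both overall and conditioned on attempted answers separates outright hallucination from abstention, which is the distinction a factuality benchmark of this kind is designed to measure.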
