CodeReviewQA: The Code Review Comprehension Assessment for Large Language Models

20 March 2025
Hong Yi Lin
Chunhua Liu
Haoyu Gao
Patanamon Thongtanunam
Christoph Treude
Abstract

State-of-the-art large language models (LLMs) have demonstrated impressive code generation capabilities but struggle with real-world software engineering tasks, such as revising source code to address code reviews, hindering their practical use. Code review comments are often implicit, ambiguous, and colloquial, requiring models to grasp both code and human intent. This challenge calls for evaluating large language models' ability to bridge both technical and conversational contexts. While existing work has employed the automated code refinement (ACR) task to resolve these comments, current evaluation methods fall short, relying on text matching metrics that provide limited insight into model failures and remain susceptible to training data contamination. To address these limitations, we introduce a novel evaluation benchmark, CodeReviewQA, that enables us to conduct fine-grained assessment of model capabilities and mitigate data contamination risks. In CodeReviewQA, we decompose the generation task of code refinement into three essential reasoning steps: change type recognition (CTR), change localisation (CL), and solution identification (SI). Each step is reformulated as multiple-choice questions with varied difficulty levels, enabling precise assessment of model capabilities, while mitigating data contamination risks. Our comprehensive evaluation spans 72 recently released large language models on 900 manually curated, high-quality examples across nine programming languages. Our results show that CodeReviewQA is able to expose specific model weaknesses in code review comprehension, disentangled from their generative automated code refinement results.
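
To make the decomposition concrete, the sketch below shows one way a CodeReviewQA-style item (change type recognition, change localisation, or solution identification) could be represented and scored as a multiple-choice question. The field names, the `MCQItem` structure, and the scoring loop are illustrative assumptions for this page, not the benchmark's released schema or evaluation harness.

```python
# Hypothetical sketch of a CodeReviewQA-style multiple-choice item and scorer.
# Field names and the baseline are assumptions, not the official dataset format.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class MCQItem:
    task: str             # "CTR", "CL", or "SI"
    code_before: str      # submitted code under review
    review_comment: str   # reviewer comment the model must interpret
    options: List[str]    # candidate answers (change types, locations, or revisions)
    answer_index: int     # index of the correct option


def accuracy(items: List[MCQItem], choose: Callable[[MCQItem], int]) -> float:
    """Fraction of items where the model's chosen option index is correct."""
    if not items:
        return 0.0
    correct = sum(1 for item in items if choose(item) == item.answer_index)
    return correct / len(items)


if __name__ == "__main__":
    # Toy example: one change type recognition (CTR) item and a trivial
    # baseline that always picks the first option.
    demo = [
        MCQItem(
            task="CTR",
            code_before="def add(a, b):\n    return a - b\n",
            review_comment="Shouldn't this be addition?",
            options=["fix logic", "rename variable", "add docs", "refactor style"],
            answer_index=0,
        )
    ]
    print(f"baseline accuracy: {accuracy(demo, lambda item: 0):.2f}")
```

Keeping each reasoning step as a separate multiple-choice task in this style is what lets per-step accuracy be reported independently of free-form code generation quality.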

@article{lin2025_2503.16167,
  title={CodeReviewQA: The Code Review Comprehension Assessment for Large Language Models},
  author={Hong Yi Lin and Chunhua Liu and Haoyu Gao and Patanamon Thongtanunam and Christoph Treude},
  journal={arXiv preprint arXiv:2503.16167},
  year={2025}
}