Teaching Language Models to Critique via Reinforcement Learning

Abstract

Teaching large language models (LLMs) to critique and refine their outputs is crucial for building systems that can iteratively improve, yet it is fundamentally limited by the ability to provide accurate judgments and actionable suggestions. In this work, we study LLM critics for code generation and propose CTRL, a framework for Critic Training via Reinforcement Learning, which trains a critic model to generate feedback that maximizes correction performance for a fixed generator model without human supervision. Our results demonstrate that critics trained with CTRL significantly enhance pass rates and mitigate compounding errors across both base and stronger generator models. Furthermore, we show that these critic models act as accurate generative reward models and enable test-time scaling through iterative critique-revision, achieving up to 106.1% relative improvements across challenging code generation benchmarks.
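The test-time critique-revision procedure described above can be pictured with a minimal sketch. The helper names (generate, critique, passes_tests) are illustrative assumptions, not the paper's API: a fixed generator proposes code, the trained critic returns feedback, and the generator revises conditioned on that feedback until the candidate passes or a revision budget is exhausted.

from typing import Callable

def critique_revision_loop(
    problem: str,
    generate: Callable[[str, str], str],   # hypothetical generator call: (problem, feedback) -> code
    critique: Callable[[str, str], str],   # hypothetical critic call: (problem, code) -> feedback
    passes_tests: Callable[[str], bool],   # hypothetical execution-based check of the candidate
    max_rounds: int = 3,
) -> str:
    """Iteratively revise a candidate solution using critic feedback (sketch)."""
    feedback = ""                          # first attempt is generated without feedback
    code = generate(problem, feedback)
    for _ in range(max_rounds):
        if passes_tests(code):             # stop early once the candidate is correct
            break
        feedback = critique(problem, code) # critic produces actionable suggestions
        code = generate(problem, feedback) # generator revises conditioned on the critique
    return code

The loop keeps the generator fixed and lets all improvement come from the critic's feedback, which matches the setting the abstract describes; the revision budget bounds the cost of test-time scaling.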

@article{xie2025_2502.03492,
  title={Teaching Language Models to Critique via Reinforcement Learning},
  author={Zhihui Xie and Jie Chen and Liyu Chen and Weichao Mao and Jingjing Xu and Lingpeng Kong},
  journal={arXiv preprint arXiv:2502.03492},
  year={2025}
}