
Accelerating RL for LLM Reasoning with Optimal Advantage Regression

Main: 11 pages · Appendix: 18 pages · Bibliography: 4 pages · 10 figures · 4 tables
Abstract

Reinforcement learning (RL) has emerged as a powerful tool for fine-tuning large language models (LLMs) to improve complex reasoning abilities. However, state-of-the-art policy optimization methods often suffer from high computational overhead and memory consumption, primarily due to the need for multiple generations per prompt and the reliance on critic networks or advantage estimates of the current policy. In this paper, we propose A*-PO, a novel two-stage policy optimization framework that directly approximates the optimal advantage function and enables efficient training of LLMs for reasoning tasks. In the first stage, we leverage offline sampling from a reference policy to estimate the optimal value function V*, eliminating the need for costly online value estimation. In the second stage, we perform on-policy updates using a simple least-squares regression loss with only a single generation per prompt. Theoretically, we establish performance guarantees and prove that the KL-regularized RL objective can be optimized without requiring complex exploration strategies. Empirically, A*-PO achieves competitive performance across a wide range of mathematical reasoning benchmarks, while reducing training time by up to 2× and peak memory usage by over 30% compared to PPO, GRPO, and REBEL. Implementation of A*-PO can be found at this https URL.
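
To make the two-stage procedure concrete, below is a minimal PyTorch-style sketch, not the authors' implementation. It assumes a scalar reward r(x, y), K offline responses per prompt sampled from the reference policy, and the standard KL-regularized optimal value V*(x) = β log E_{y~π_ref}[exp(r(x, y)/β)]; the function names, tensor shapes, and the log-sum-exp estimator are illustrative assumptions.

import math
import torch

def estimate_v_star(ref_rewards, beta):
    # Stage 1 (sketch): approximate V*(x) per prompt from K offline samples
    # drawn from the reference policy. ref_rewards has assumed shape [num_prompts, K].
    # V*(x) = beta * log E_{y~pi_ref}[exp(r(x,y)/beta)] is estimated with a
    # log-sum-exp over the K sampled rewards.
    K = ref_rewards.shape[1]
    return beta * (torch.logsumexp(ref_rewards / beta, dim=1) - math.log(K))

def a_star_po_loss(logprob_policy, logprob_ref, reward, v_star, beta):
    # Stage 2 (sketch): least-squares regression onto the optimal advantage
    # A*(x, y) = r(x, y) - V*(x), using a single on-policy generation per prompt.
    # Inputs are per-prompt scalars (sequence log-probs summed over tokens).
    pred = beta * (logprob_policy - logprob_ref)
    target = reward - v_star
    return ((pred - target.detach()) ** 2).mean()

Because the regression target relies only on the reward of one on-policy sample and the precomputed V*(x), the second stage avoids critics and per-prompt groups of generations, which is where the reported compute and memory savings relative to PPO, GRPO, and REBEL would come from.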

@article{brantley2025_2505.20686,
  title={Accelerating RL for LLM Reasoning with Optimal Advantage Regression},
  author={Kianté Brantley and Mingyu Chen and Zhaolin Gao and Jason D. Lee and Wen Sun and Wenhao Zhan and Xuezhou Zhang},
  journal={arXiv preprint arXiv:2505.20686},
  year={2025}
}