Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning

Policy-based methods currently dominate reinforcement learning (RL) pipelines for large language model (LLM) reasoning, leaving value-based approaches largely unexplored. We revisit the classical paradigm of Bellman Residual Minimization and introduce Trajectory Bellman Residual Minimization (TBRM), an algorithm that naturally adapts this idea to LLMs, yielding a simple yet effective off-policy algorithm that optimizes a single trajectory-level Bellman objective using the model's own logits as Q-values. TBRM removes the need for critics, importance-sampling ratios, or clipping, and operates with only one rollout per prompt. We prove convergence to the near-optimal KL-regularized policy from arbitrary off-policy data via an improved change-of-trajectory-measure analysis. Experiments on standard mathematical-reasoning benchmarks show that TBRM consistently outperforms policy-based baselines, like PPO and GRPO, with comparable or lower computational and memory overhead. Our results indicate that value-based RL might be a principled and efficient alternative for enhancing reasoning capabilities in LLMs.
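As a rough, hedged sketch of the kind of objective the abstract describes (the exact loss, discounting, and value definition used by TBRM are given in the paper; the symbols below, including the trajectory distribution $\mu$ and temperature $\beta$, are our own notation), one can treat the model's logits $Q_\theta(s_t,\cdot)$ as Q-values, define a KL-regularized soft value $V_\theta(s)=\beta\log\sum_a \pi_{\mathrm{ref}}(a\mid s)\exp\!\big(Q_\theta(s,a)/\beta\big)$ with respect to a reference policy $\pi_{\mathrm{ref}}$, and minimize a squared trajectory-level Bellman residual over a single rollout $\tau=(s_0,a_0,r_0,\dots,s_T)$:

$$
\mathcal{L}(\theta)\;=\;\mathbb{E}_{\tau\sim\mu}\Big[\Big(\sum_{t=0}^{T-1}\big(Q_\theta(s_t,a_t)-r_t-V_\theta(s_{t+1})\big)\Big)^{2}\Big],
\qquad V_\theta(s_T):=0.
$$

Because the residual is built directly from the model's own logits on trajectories drawn from an arbitrary distribution $\mu$, a loss of this form involves no separate critic, importance-sampling ratio, or clipping, consistent with the claims above.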
@article{yuan2025_2505.15311,
  title   = {Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning},
  author  = {Yurun Yuan and Fan Chen and Zeyu Jia and Alexander Rakhlin and Tengyang Xie},
  journal = {arXiv preprint arXiv:2505.15311},
  year    = {2025}
}