
ResT: Reshaping Token-Level Policy Gradients for Tool-Use Large Language Models

Main: 9 pages
4 figures
Bibliography: 3 pages
6 tables
Appendix: 11 pages
Abstract

Large language models (LLMs) transcend passive generation and act as goal-directed agents by invoking external tools. Reinforcement learning (RL) offers a principled framework for optimizing these emergent tool-use policies, yet the prevailing paradigm relies exclusively on sparse outcome rewards and overlooks the particular structure of tool-use tasks, inflating policy-gradient variance and resulting in inefficient training. To better understand and address these challenges, we first establish a theoretical link between policy entropy and training stability in tool-use tasks, which reveals that structured, low-entropy tokens are the primary determinants of rewards. Motivated by this insight, we propose Reshaped Token-level policy gradients (ResT) for tool-use tasks. ResT reshapes the policy gradient through entropy-informed token reweighting, progressively upweighting reasoning tokens as training proceeds. This entropy-aware scheme enables a smooth shift from structural correctness to semantic reasoning and stabilizes convergence in multi-turn tool-use tasks. Evaluation on BFCL and API-Bank shows that ResT achieves state-of-the-art results, outperforming prior methods by up to 8.76%. When fine-tuned on a 4B base LLM, ResT further surpasses GPT-4o by 4.11% on single-turn tasks and 1.50% on multi-turn base tasks.
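
To illustrate the idea of entropy-informed token reweighting described above, here is a minimal sketch in PyTorch. It is not the authors' implementation: the function names (`rest_token_weights`, `reshaped_pg_loss`), the linear schedule on `alpha`, and the entropy normalization are all illustrative assumptions; only the general mechanism (per-token entropy used to reweight token-level policy-gradient terms, shifting weight toward high-entropy reasoning tokens over training) follows the abstract.

```python
# Hypothetical sketch of entropy-informed token reweighting (not the paper's code).
import torch
import torch.nn.functional as F

def rest_token_weights(logits, step, total_steps, alpha_min=0.1):
    """Per-token weights derived from predictive entropy.

    Early in training, low-entropy (structured) tokens receive most of the
    weight; the mixing coefficient `alpha` grows with the training step so
    that high-entropy (reasoning) tokens are progressively upweighted.
    The linear schedule and normalization here are assumptions.
    """
    probs = F.softmax(logits, dim=-1)                           # (batch, seq, vocab)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)   # (batch, seq)
    ent_norm = entropy / entropy.max().clamp_min(1e-12)         # scale to [0, 1]

    alpha = alpha_min + (1.0 - alpha_min) * step / max(total_steps, 1)
    # Blend: favor structured (low-entropy) tokens early, reasoning tokens later.
    weights = (1.0 - alpha) * (1.0 - ent_norm) + alpha * ent_norm
    return weights.detach()

def reshaped_pg_loss(logprobs, logits, advantages, step, total_steps):
    """Token-level REINFORCE-style loss with reshaped (reweighted) gradients."""
    w = rest_token_weights(logits, step, total_steps)           # (batch, seq)
    return -(w * advantages * logprobs).mean()
```

In this sketch the weights are detached so the reshaping only rescales each token's gradient contribution rather than adding a new optimization target.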
