RRO: LLM Agent Optimization Through Rising Reward Trajectories

27 May 2025
Zilong Wang
Jingfeng Yang
Sreyashi Nag
Samarth Varshney
Xianfeng Tang
Haoming Jiang
Jingbo Shang
Sheikh Sarwar
Main: 5 pages, 2 figures, 2 tables; Appendix: 6 pages
Abstract

Large language models (LLMs) have exhibited extraordinary performance on a variety of tasks, yet it remains challenging for them to solve complex multi-step tasks as agents. In practice, agents are sensitive to the outcomes of certain key steps, which makes them likely to fail a task because of a single subtle mistake in the planning trajectory. Recent approaches resort to calibrating the reasoning process through reinforcement learning: they reward or penalize every reasoning step with process supervision, using what are known as Process Reward Models (PRMs). However, PRMs are difficult and costly to scale up to a large number of next-action candidates, since they require extensive computation to acquire training data through per-step trajectory exploration. To mitigate this issue, we focus on the relative reward trend across successive reasoning steps and propose maintaining an increasing reward in the collected trajectories for process supervision, which we term Reward Rising Optimization (RRO). Specifically, we incrementally expand the pool of next-action candidates until we identify a step exhibiting a positive reward differential, i.e., a rising reward, relative to its preceding step. This method dynamically expands the search space for next-action candidates and efficiently captures high-quality data. We provide mathematical grounding and empirical results on the WebShop and InterCode-SQL benchmarks, showing that the proposed RRO achieves superior performance while requiring much lower exploration cost.
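The data-collection loop described in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' implementation: `reward_model`, `sample_action`, and the list-based state transition are hypothetical stand-ins for a process reward model, the agent's policy, and the environment.

```python
def collect_rising_trajectory(state, reward_model, sample_action,
                              max_steps=10, max_candidates=16):
    """Sketch of RRO trajectory collection with rising per-step rewards.

    At each step, next-action candidates are sampled one at a time; sampling
    stops as soon as a candidate's reward exceeds the previous step's reward
    (a positive reward differential). This lazily expands the candidate pool
    instead of exhaustively scoring a large fixed set per step.
    """
    trajectory = []
    prev_reward = float("-inf")
    for _ in range(max_steps):
        best_action, best_reward = None, float("-inf")
        for _ in range(max_candidates):
            action = sample_action(state)
            r = reward_model(state, action)
            if r > best_reward:
                best_action, best_reward = action, r
            if r > prev_reward:  # rising reward found: stop sampling early
                break
        if best_action is None:
            break
        trajectory.append((state, best_action, best_reward))
        state = state + [best_action]  # hypothetical state transition
        prev_reward = best_reward
    return trajectory
```

If no candidate beats the previous step's reward within the budget, the sketch falls back to the best candidate seen, so a trajectory is still produced; the abstract does not specify the authors' handling of this case.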

@article{wang2025_2505.20737,
  title={RRO: LLM Agent Optimization Through Rising Reward Trajectories},
  author={Zilong Wang and Jingfeng Yang and Sreyashi Nag and Samarth Varshney and Xianfeng Tang and Haoming Jiang and Jingbo Shang and Sheikh Muhammad Sarwar},
  journal={arXiv preprint arXiv:2505.20737},
  year={2025}
}