RL-finetuning LLMs from on- and off-policy data with a single algorithm

25 March 2025
Yunhao Tang
Taco Cohen
David W. Zhang
Michal Valko
Rémi Munos
    OffRL
Abstract

We introduce a novel reinforcement learning algorithm (AGRO, for Any-Generation Reward Optimization) for fine-tuning large language models. AGRO leverages the concept of generation consistency, which states that the optimal policy satisfies a consistency condition across all possible generations of the model. We derive algorithms that find optimal solutions via a sample-based policy gradient and provide theoretical guarantees on their convergence. Our experiments demonstrate the effectiveness of AGRO in both on-policy and off-policy settings, showing improved performance over baseline algorithms on a mathematical reasoning dataset.
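The abstract mentions a sample-based policy gradient that can consume both on- and off-policy generations. As a rough illustration only, and not the paper's actual AGRO method, the sketch below shows a generic importance-weighted policy-gradient update step in PyTorch; the policy.logprob helper and all argument names are assumptions made for this example.

# Hypothetical sketch of an importance-weighted policy-gradient step that
# handles on- and off-policy generations; NOT the paper's AGRO implementation.
import torch

def policy_gradient_step(policy, optimizer, generations, rewards, behavior_logprobs):
    """One reward-weighted policy-gradient step on a batch of generations.

    generations       -- list of token-id tensors, sampled either from the current
                         policy (on-policy) or from an older policy (off-policy)
    rewards           -- 1-D tensor of scalar rewards, one per generation
    behavior_logprobs -- 1-D tensor of log-probabilities of each generation under
                         the policy that produced it (equals the current policy's
                         log-probs in the purely on-policy case)
    """
    # Log-probability of each generation under the current policy
    # (policy.logprob is a hypothetical helper that sums per-token log-probs).
    logprobs = torch.stack([policy.logprob(g) for g in generations])

    # Importance ratio corrects the mismatch between the sampling policy and the
    # current policy; it is exactly 1 for on-policy samples.
    ratios = torch.exp(logprobs - behavior_logprobs).detach()

    # REINFORCE-style surrogate: maximize importance-weighted, reward-weighted
    # log-likelihood of the sampled generations.
    loss = -(ratios * rewards * logprobs).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In the on-policy case the ratios are all 1 and this reduces to plain REINFORCE; for off-policy data the ratios reweight stale generations toward the current policy, which is one standard way to combine the two data sources.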

@article{tang2025_2503.19612,
  title={RL-finetuning LLMs from on- and off-policy data with a single algorithm},
  author={Yunhao Tang and Taco Cohen and David W. Zhang and Michal Valko and Rémi Munos},
  journal={arXiv preprint arXiv:2503.19612},
  year={2025}
}