SuperRL: Reinforcement Learning with Supervision to Boost Language Model Reasoning

1 June 2025
Yihao Liu
Shuocheng Li
Lang Cao
Yuhang Xie
Mengyu Zhou
Haoyu Dong
Xiaojun Ma
Shi Han
Dongmei Zhang
Communities: OffRL, ReLM, LRM
Main: 9 pages · 5 figures · 12 tables · Bibliography: 3 pages · Appendix: 18 pages
Abstract

Large language models are increasingly used for complex reasoning tasks where high-quality offline data, such as expert-annotated solutions and distilled reasoning traces, is often available. However, in environments with sparse rewards, reinforcement learning struggles to sample successful trajectories, leading to inefficient learning. At the same time, these offline trajectories, which represent correct reasoning paths, go unused by standard on-policy reinforcement learning methods. To address this limitation, we propose SuperRL, a unified training framework that adaptively incorporates offline supervision into reinforcement learning. SuperRL introduces an Adaptive Switch that detects sparse-reward conditions and activates a Hybrid Actor when necessary. The Hybrid Actor integrates policy gradient and supervised learning objectives at the loss level, enabling the model to benefit from accurate offline reasoning signals while maintaining the exploratory capacity of reinforcement learning. Experiments on a range of reasoning benchmarks show that SuperRL consistently outperforms standard reinforcement learning by improving sample efficiency, generalization, and robustness under sparse rewards.
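
The loss-level mixing described in the abstract can be pictured with a minimal sketch. This is not the paper's implementation: the names (sparse_reward, hybrid_loss), the sparsity heuristic, and the mixing weight alpha are illustrative assumptions; only the idea of switching on reward sparsity and combining a policy-gradient term with a supervised term on offline trajectories comes from the abstract.

import torch

def sparse_reward(rewards: torch.Tensor, min_success_rate: float = 0.1) -> bool:
    # Hypothetical "adaptive switch": treat the rollout batch as sparse when
    # fewer than min_success_rate of trajectories earned a positive reward.
    return (rewards > 0).float().mean().item() < min_success_rate

def hybrid_loss(pg_logprobs: torch.Tensor,
                advantages: torch.Tensor,
                sft_logprobs: torch.Tensor,
                rewards: torch.Tensor,
                alpha: float = 0.5) -> torch.Tensor:
    # Policy-gradient (REINFORCE-style) term on sampled, on-policy trajectories.
    pg_loss = -(advantages.detach() * pg_logprobs).mean()
    # Supervised (negative log-likelihood) term on offline expert trajectories.
    sft_loss = -sft_logprobs.mean()
    if sparse_reward(rewards):
        # "Hybrid Actor" active: combine both objectives at the loss level.
        return (1.0 - alpha) * pg_loss + alpha * sft_loss
    # Rewards are dense enough: fall back to the plain RL objective.
    return pg_loss

# Toy usage with dummy per-trajectory quantities (8 on-policy rollouts,
# 4 offline traces); all-zero rewards trigger the sparse-reward branch.
pg_logprobs = torch.randn(8, requires_grad=True)
sft_logprobs = torch.randn(4, requires_grad=True)
loss = hybrid_loss(pg_logprobs, torch.randn(8), sft_logprobs, torch.zeros(8))
loss.backward()

In this sketch the supervised term simply maximizes the policy's likelihood of the offline reasoning traces, so when sampled rollouts earn no reward the model still receives a useful gradient signal instead of a near-zero policy-gradient update.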

@article{liu2025_2506.01096,
  title={SuperRL: Reinforcement Learning with Supervision to Boost Language Model Reasoning},
  author={Yihao Liu and Shuocheng Li and Lang Cao and Yuhang Xie and Mengyu Zhou and Haoyu Dong and Xiaojun Ma and Shi Han and Dongmei Zhang},
  journal={arXiv preprint arXiv:2506.01096},
  year={2025}
}