Learning What Reinforcement Learning Can't: Interleaved Online Fine-Tuning for Hardest Questions

9 June 2025
Lu Ma, Hao Liang, Meiyi Qiang, Lexiang Tang, Xiaochen Ma, Zhen Hao Wong, Junbo Niu, Chengyu Shen, Runming He, Bin Cui, Wentao Zhang
Topics: ReLM, OffRL, LRM
Main: 8 pages · Bibliography: 3 pages · Appendix: 1 page · 5 figures · 3 tables
Abstract

Recent advances in large language model (LLM) reasoning have shown that sophisticated behaviors such as planning and self-reflection can emerge through reinforcement learning (RL). However, despite these successes, RL in its current form remains insufficient to induce capabilities that exceed the limitations of the base model, as it primarily optimizes over the model's existing knowledge rather than facilitating the acquisition of new information. To address this limitation, we employ supervised fine-tuning (SFT) to learn what RL cannot, which enables the incorporation of new knowledge and reasoning patterns by leveraging high-quality demonstration data. We analyze the training dynamics of RL and SFT for LLM reasoning and find that RL excels at maintaining and improving performance on questions within the model's original capabilities, while SFT is more effective at enabling progress on questions beyond the current scope of the model. Motivated by the complementary strengths of RL and SFT, we introduce a novel training approach, \textbf{ReLIFT} (\textbf{Re}inforcement \textbf{L}earning \textbf{I}nterleaved with Online \textbf{F}ine-\textbf{T}uning). In ReLIFT, the model is primarily trained using RL, but when it encounters challenging questions, high-quality solutions are collected for fine-tuning, and the training process alternates between RL and fine-tuning to enhance the model's reasoning abilities. ReLIFT achieves an average improvement of over +5.2 points across five competition-level benchmarks and one out-of-distribution benchmark compared to other zero-RL models. Furthermore, we demonstrate that ReLIFT outperforms both RL and SFT while using only 13\% of the detailed demonstration data, highlighting its scalability. These results provide compelling evidence that ReLIFT overcomes the fundamental limitations of RL and underscores its significant potential.
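The interleaved schedule the abstract describes — train mainly with RL, collect demonstrations for questions the policy fails, and periodically fine-tune on them — can be illustrated with a toy simulation. All names here (`toy_rl_step`, `toy_sft_step`, `relift_loop`, the `skill`/`difficulty` scalars) are hypothetical stand-ins for illustration only, not the authors' implementation.

```python
def toy_rl_step(policy, question):
    """Stand-in for a policy-gradient update: RL sharpens performance
    on questions near the model's current capability, but yields no
    reward signal on questions far beyond it."""
    if policy["skill"] >= question["difficulty"] - 1.0:
        policy["skill"] += 0.1  # within reach: RL improves the policy
        return True
    return False  # beyond current scope: no learning signal from RL

def toy_sft_step(policy, demonstration):
    """Stand-in for supervised fine-tuning on a high-quality solution:
    injects capability the model did not already have."""
    policy["skill"] = max(policy["skill"], demonstration["difficulty"])

def relift_loop(policy, questions, sft_every=2):
    """Primarily RL; hard questions are routed to a buffer of
    demonstrations, and training alternates between RL and SFT."""
    hard_buffer = []
    for step, q in enumerate(questions, start=1):
        solved = toy_rl_step(policy, q)
        if not solved:
            # Collect a demonstration only for the hardest questions.
            hard_buffer.append({"difficulty": q["difficulty"]})
        if step % sft_every == 0 and hard_buffer:
            # Interleave: fine-tune on the collected solutions.
            for demo in hard_buffer:
                toy_sft_step(policy, demo)
            hard_buffer.clear()
    return policy

policy = {"skill": 1.0}
questions = [{"difficulty": d} for d in (1.2, 3.5, 1.5, 4.0, 2.0, 4.5)]
relift_loop(policy, questions)
```

In this toy trace, RL alone would never earn reward on the difficulty-3.5 question, but after one SFT pass on its demonstration the policy can solve the later, harder questions via RL — mirroring the complementary-strengths argument in the abstract.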

@article{ma2025_2506.07527,
  title={Learning What Reinforcement Learning Can't: Interleaved Online Fine-Tuning for Hardest Questions},
  author={Lu Ma and Hao Liang and Meiyi Qiang and Lexiang Tang and Xiaochen Ma and Zhen Hao Wong and Junbo Niu and Chengyu Shen and Runming He and Bin Cui and Wentao Zhang},
  journal={arXiv preprint arXiv:2506.07527},
  year={2025}
}