SRPO: A Cross-Domain Implementation of Large-Scale Reinforcement Learning on LLM

Recent advances in reasoning models, exemplified by OpenAI's o1 and DeepSeek's R1, highlight the significant potential of Reinforcement Learning (RL) to enhance the reasoning capabilities of Large Language Models (LLMs). However, replicating these advancements across diverse domains remains challenging due to limited methodological transparency. In this work, we present two-Staged history-Resampling Policy Optimization (SRPO), which surpasses the performance of DeepSeek-R1-Zero-32B on the AIME24 and LiveCodeBench benchmarks. SRPO achieves this with the same base model as DeepSeek (i.e., Qwen2.5-32B) while requiring only about 1/10 of the training steps of DeepSeek-R1-Zero-32B, demonstrating superior efficiency. Building upon Group Relative Policy Optimization (GRPO), we introduce two key methodological innovations: (1) a two-stage cross-domain training paradigm designed to balance the development of mathematical reasoning and coding proficiency, and (2) History Resampling (HR), a technique for addressing ineffective training samples. Our comprehensive experiments validate the effectiveness of our approach and offer valuable insights into scaling LLM reasoning capabilities across diverse tasks.
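To make the two ingredients named in the abstract concrete, the Python sketch below illustrates (a) GRPO-style group-relative advantages, where each rollout's reward is normalized against the mean and standard deviation of its sampling group instead of a learned critic, and (b) a hypothetical History Resampling filter that drops prompts whose rollout groups carry no learning signal (all rewards identical, hence zero group-relative advantage). This is a minimal illustration under those assumptions, not the authors' implementation; the exact HR criterion is defined in the paper, and the function and variable names here are invented for the example.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages (GRPO-style): normalize each rollout's
    reward by the mean and std of its own sampling group."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def history_resample(groups):
    """Hypothetical History Resampling filter (illustrative only): keep a
    prompt only if its rollout rewards are not all identical, since a
    uniform group yields zero advantage and thus no gradient signal."""
    return [(prompt, rewards) for prompt, rewards in groups
            if len(set(rewards)) > 1]

# Example: three prompts, each with a group of 4 sampled rollouts
# scored by a binary correctness reward.
groups = [
    ("math_problem_1", [1.0, 0.0, 1.0, 0.0]),  # mixed outcomes -> kept
    ("math_problem_2", [1.0, 1.0, 1.0, 1.0]),  # all correct    -> dropped
    ("code_task_3",    [0.0, 0.0, 0.0, 0.0]),  # all incorrect  -> dropped
]

for prompt, rewards in history_resample(groups):
    print(prompt, grpo_advantages(rewards))
```

Under these assumptions, only the first prompt survives resampling, so every group that reaches the policy update contributes a non-trivial advantage estimate.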
@article{zhang2025_2504.14286,
  title   = {SRPO: A Cross-Domain Implementation of Large-Scale Reinforcement Learning on LLM},
  author  = {Xiaojiang Zhang and Jinghui Wang and Zifei Cheng and Wenhao Zhuang and Zheng Lin and Minglei Zhang and Shaojie Wang and Yinghan Cui and Chao Wang and Junyi Peng and Shimiao Jiang and Shiqi Kuang and Shouyu Yin and Chaohang Wen and Haotian Zhang and Bin Chen and Bing Yu},
  journal = {arXiv preprint arXiv:2504.14286},
  year    = {2025}
}