
AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning

Main: 8 pages · 8 figures · 5 tables · Appendix: 3 pages · Bibliography: 7 pages
Abstract

Reinforcement learning (RL) has become a dominant paradigm for training large language models (LLMs), particularly for reasoning tasks. Effective RL for LLMs requires massive parallelization and poses an urgent need for efficient training systems. Most existing large-scale RL systems for LLMs are synchronous, alternating generation and training in a batch setting where the rollouts in each training batch are generated by the same model. This approach stabilizes RL training but suffers from severe system-level inefficiency: generation must wait until the longest output in the batch is complete before the model can be updated, resulting in GPU underutilization. We present AReaL, a fully asynchronous RL system that completely decouples generation from training. Rollout workers in AReaL continuously generate new outputs without waiting, while training workers update the model whenever a batch of data is collected. AReaL also incorporates a collection of system-level optimizations, leading to substantially higher GPU utilization. To stabilize RL training, AReaL balances the workload of rollout and training workers to control data staleness, and adopts a staleness-enhanced PPO variant to better handle outdated training samples. Extensive experiments on math and code reasoning benchmarks show that AReaL achieves up to 2.77× training speedup over synchronous systems with the same number of GPUs, with matched or improved final performance. The code of AReaL is available at this https URL.
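The producer/consumer design the abstract describes can be sketched in a few lines of Python. The following is a minimal illustration, not AReaL's actual implementation: rollout workers push samples tagged with the policy version that generated them, the trainer updates as soon as a full batch is queued, and a capped backlog stands in for AReaL's rollout/training workload balancing as a way to bound staleness. All names here (rollout_worker, MAX_STALENESS, etc.) are hypothetical.

# Minimal sketch of the asynchronous rollout/training loop described in the
# abstract; illustrative only, not AReaL's actual code. The model is reduced
# to a version counter, and a bounded backlog stands in for AReaL's
# rollout/training workload balancing.
import queue
import random
import threading
import time

BATCH_SIZE = 4           # samples consumed per model update (illustrative)
MAX_STALENESS = 2        # target bound on (trainer version - sample version)
NUM_ROLLOUT_WORKERS = 3
TOTAL_UPDATES = 5

sample_queue: queue.Queue = queue.Queue()  # holds (policy_version, sample)
model_version = 0
version_lock = threading.Lock()
stop = threading.Event()

def rollout_worker() -> None:
    # Generate continuously; never wait for the trainer. Capping the queued
    # backlog roughly caps how many versions old a sample can be by the time
    # the trainer consumes it.
    while not stop.is_set():
        if sample_queue.qsize() >= MAX_STALENESS * BATCH_SIZE:
            time.sleep(0.01)
            continue
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for LLM generation
        with version_lock:
            v = model_version
        sample_queue.put((v, random.random()))  # version tag + dummy reward

def trainer() -> None:
    # Update the model as soon as a full batch of samples is available,
    # regardless of which policy versions produced them.
    global model_version
    for _ in range(TOTAL_UPDATES):
        batch = [sample_queue.get() for _ in range(BATCH_SIZE)]
        with version_lock:
            model_version += 1
            v = model_version
        print(f"update {v}: sample staleness {[v - sv for sv, _ in batch]}")
    stop.set()

for _ in range(NUM_ROLLOUT_WORKERS):
    threading.Thread(target=rollout_worker, daemon=True).start()
trainer()

In the real system the rollout workers are LLM inference servers that receive propagated weight updates, and the staleness-enhanced PPO variant corrects for samples whose version tag trails the current policy; the version counter and dummy rewards above merely stand in for those pieces.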

@article{fu2025_2505.24298,
  title={AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning},
  author={Wei Fu and Jiaxuan Gao and Xujie Shen and Chen Zhu and Zhiyu Mei and Chuyi He and Shusheng Xu and Guo Wei and Jun Mei and Jiashu Wang and Tongkai Yang and Binhang Yuan and Yi Wu},
  journal={arXiv preprint arXiv:2505.24298},
  year={2025}
}