UFO-RL: Uncertainty-Focused Optimization for Efficient Reinforcement Learning Data Selection

Scaling RL for LLMs is computationally expensive, largely due to the multi-sampling required for policy optimization and evaluation, making efficient data selection crucial. Inspired by the Zone of Proximal Development (ZPD) theory, we hypothesize that LLMs learn best from data within their potential comprehension zone. To address the limitations of conventional, computationally intensive multi-sampling methods for data assessment, we introduce UFO-RL. This novel framework uses a computationally efficient single-pass uncertainty estimation to identify informative data instances, achieving up to 185x faster data evaluation. UFO-RL leverages this metric to select data within the estimated ZPD for training. Experiments show that training on just 10% of the data selected by UFO-RL yields performance comparable to or surpassing full-data training, reducing overall training time by up to 16x while enhancing stability and generalization. UFO-RL thus offers a practical and highly efficient strategy for scaling RL fine-tuning of LLMs by focusing learning on valuable data.
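To make the selection idea concrete, the following is a minimal sketch (not the paper's implementation) of single-pass, uncertainty-based data filtering: each instance gets one forward pass, its mean next-token entropy serves as the uncertainty score, and only instances whose score falls in a mid-range "learnable" band (a stand-in for the estimated ZPD) are kept. The model name, the entropy metric, and the band thresholds LOW and HIGH are illustrative assumptions, not values from the paper.

```python
# Sketch of single-pass uncertainty scoring for RL data selection.
# All concrete choices below (model, metric, thresholds) are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-1.5B"  # hypothetical base policy model
LOW, HIGH = 0.5, 2.0         # hypothetical uncertainty band, in nats/token

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def mean_token_entropy(prompt: str) -> float:
    """One forward pass: average next-token entropy over the prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    logits = model(ids).logits                       # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(-1)  # (1, seq_len)
    return entropy.mean().item()

def select_zpd(dataset: list[str]) -> list[str]:
    """Keep instances whose uncertainty lies in the mid-range band:
    neither trivially easy (low entropy) nor out of reach (high entropy)."""
    return [x for x in dataset if LOW <= mean_token_entropy(x) <= HIGH]
```

The key cost property the sketch illustrates: scoring is one forward pass per instance, versus the many sampled rollouts per instance that multi-sampling assessment requires, which is where the reported evaluation speedup comes from.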
@article{zhao2025_2505.12457,
  title={UFO-RL: Uncertainty-Focused Optimization for Efficient Reinforcement Learning Data Selection},
  author={Yang Zhao and Kai Xiong and Xiao Ding and Li Du and Yangou Ouyang and Zhouhao Sun and Jiannan Guan and Wenbin Zhang and Bin Liu and Dong Hu and Bing Qin and Ting Liu},
  journal={arXiv preprint arXiv:2505.12457},
  year={2025}
}