
Diagnosing and Mitigating System Bias in Self-Rewarding RL

Main: 8 pages · Appendix: 4 pages · Bibliography: 3 pages · 8 figures · 3 tables
Abstract

Reinforcement learning with verifiable rewards (RLVR) scales the reasoning ability of large language models (LLMs) but remains bottlenecked by the limited supply of labeled samples for continued data scaling. Reinforcement learning with intrinsic rewards (RLIR), where the policy model assigns rewards to its own rollouts, enables sustainable scaling in unlabeled settings, yet its performance and stability lag behind RLVR. We trace this gap to a system bias: the model tends to overestimate its high-confidence rollouts, leading to biased and unstable reward estimation. This bias accumulates as training progresses, with deviations from the oracle drifting toward over-reward and destabilizing training. We characterize the bias with three metrics: $\rho_{\text{noise}}$, $\rho_{\text{selfbias}}$, and $\rho_{\text{symbias}}$. We find that $\rho_{\text{noise}}$ and $\rho_{\text{symbias}}$ affect convergence, while $\rho_{\text{selfbias}}$ amplifies both correct and incorrect updates, leading to instability. To mitigate this, we propose reinforcement learning with ensembled rewards (RLER), which aggregates rewards from diverse models and adapts reward interpolation and rollout selection. Extensive experiments show that RLER improves over RLIR by +13.6% while falling only 3.6% short of RLVR, achieving stable scaling on unlabeled samples and making the approach broadly applicable.
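
To make the reward-aggregation idea concrete, below is a minimal Python sketch of what "ensembled rewards" with reward interpolation and rollout selection could look like. All names here (`self_reward`, `ensemble_reward`, the interpolation weight `alpha`, the mean aggregation, and the top-k selection rule) are illustrative assumptions; the abstract does not specify RLER's exact formulation.

```python
import numpy as np

# Hypothetical sketch of interpolating self-assigned rewards with an ensemble.
# The aggregation (mean), the weight `alpha`, and the top-k selection are
# placeholders, not the paper's exact method.

def self_reward(policy_scores: np.ndarray) -> np.ndarray:
    """Intrinsic reward: the policy model scores its own rollouts (RLIR-style)."""
    return policy_scores

def ensemble_reward(ensemble_scores: np.ndarray) -> np.ndarray:
    """Aggregate rewards from diverse models by averaging across the ensemble."""
    return ensemble_scores.mean(axis=0)

def interpolated_reward(policy_scores: np.ndarray,
                        ensemble_scores: np.ndarray,
                        alpha: float = 0.5) -> np.ndarray:
    """Interpolate between self-assigned and ensembled rewards.

    A smaller alpha leans on the ensemble, which can dampen the
    self-overestimation (system bias) described in the abstract.
    """
    return alpha * self_reward(policy_scores) + (1.0 - alpha) * ensemble_reward(ensemble_scores)

def select_rollouts(rewards: np.ndarray, k: int) -> np.ndarray:
    """Keep the k rollouts with the highest interpolated reward
    (a placeholder selection rule; the paper's criterion may differ)."""
    return np.argsort(-rewards)[:k]

# Toy usage: 4 rollouts scored by the policy itself and by 3 ensemble models.
policy_scores = np.array([0.9, 0.2, 0.7, 0.4])
ensemble_scores = np.array([[0.6, 0.3, 0.8, 0.5],
                            [0.5, 0.1, 0.7, 0.6],
                            [0.7, 0.2, 0.9, 0.4]])
rewards = interpolated_reward(policy_scores, ensemble_scores, alpha=0.3)
print(rewards, select_rollouts(rewards, k=2))
```

The point of the interpolation is that the ensemble term acts as a check on the policy's tendency to over-reward its own high-confidence rollouts; how the weight and the selection rule are adapted during training is detailed in the paper itself.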
