
Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening

Comments: 8 pages main text, 10 figures, 4 tables, 2 pages bibliography, 2 pages appendix
Abstract

Reinforcement learning is emerging as a primary driver for improving language model reasoning capabilities. A fundamental question is whether current reinforcement learning algorithms -- such as Group Relative Policy Optimization (GRPO), the de facto standard algorithm used to improve language model reasoning -- merely sharpen the base model's distribution around problems it can already solve. We investigate this question in the context of formal theorem proving, which has access to a perfect verifier. We identify a degenerate rank bias in GRPO in which highly probable trajectories are reinforced and rare ones are neglected. This results in distribution sharpening: the model can solve some problems with fewer samples, but underperforms simply sampling more solutions from the original model. To overcome GRPO's rank bias we introduce unlikeliness reward, a simple method for explicitly up-weighting rare but correct solutions. We show that unlikeliness reward mitigates rank bias and improves pass@N across a large range of N in both synthetic and real theorem proving settings. We also uncover an unexpected link between rank bias and a seemingly mundane hyperparameter -- the number of updates per batch -- that leads to a second, complementary mitigation. We combine our insights into a revised GRPO training recipe for formal theorem proving, yielding an open pipeline that achieves competitive performance to DeepSeek-Prover-V1.5-RL on the miniF2F-test benchmark. We release our implementation at this https URL
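To make the idea concrete, below is a minimal sketch of how an "unlikeliness" bonus could be folded into GRPO's group-relative advantage computation. The function name, the rank-based form of the bonus, and the scaling factor `beta` are illustrative assumptions for exposition, not the paper's exact formulation; only the group-mean/std normalization follows standard GRPO.

```python
import torch

def grpo_advantages_with_unlikeliness(rewards, logprobs, beta=1.0):
    """Hypothetical sketch: group-relative advantages with an added
    'unlikeliness' bonus that up-weights rare but correct samples.

    rewards:  (G,) tensor of 0/1 verifier rewards for one problem's group
    logprobs: (G,) tensor of the policy's total log-probability per sample
    beta:     assumed scaling factor for the bonus (not from the paper)
    """
    # Rank samples by how likely the current policy considers them;
    # rank 0 = least likely. The bonus is largest for rare, correct samples.
    ranks = torch.argsort(torch.argsort(logprobs)).float()
    unlikeliness = 1.0 - ranks / max(len(logprobs) - 1, 1)  # 1.0 for rarest
    shaped = rewards + beta * rewards * unlikeliness

    # Standard GRPO normalization: subtract the group mean, divide by std.
    return (shaped - shaped.mean()) / (shaped.std() + 1e-8)
```

Under plain GRPO, all correct samples in a group receive the same advantage, so gradient updates favor the trajectories the policy already assigns high probability; shaping the reward as above is one way to push credit toward correct but low-probability proofs.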

@article{he2025_2506.02355,
  title={Rewarding the Unlikely: Lifting GRPO Beyond Distribution Sharpening},
  author={Andre He and Daniel Fried and Sean Welleck},
  journal={arXiv preprint arXiv:2506.02355},
  year={2025}
}