
R-Align: Enhancing Generative Reward Models through Rationale-Centric Meta-Judging

Yanlin Lai
Mitt Huang
Hangyu Guo
Xiangfeng Wang
Haodong Li
Shaoxiong Zhan
Liang Zhao
Chengyuan Yao
Yinmin Zhang
Qi Han
Chun Yuan
Zheng Ge
Xiangyu Zhang
Daxin Jiang
Main: 12 pages · 7 figures · 2 tables · Bibliography: 3 pages · Appendix: 6 pages
Abstract

Reinforcement Learning from Human Feedback (RLHF) remains indispensable for aligning large language models (LLMs) in subjective domains. To enhance robustness, recent work has shifted toward Generative Reward Models (GenRMs), which generate rationales before predicting preferences. Yet GenRM training and evaluation remain driven by outcome labels alone, leaving reasoning quality unchecked. We show that reasoning fidelity, the consistency between a GenRM's decision rationales and the reference rationales, is highly predictive of downstream RLHF outcomes beyond standard label accuracy. Specifically, we repurpose existing reward-model benchmarks to compute Spurious Correctness (S-Corr), the fraction of label-correct decisions whose rationales are misaligned with the gold judgments. Our empirical evaluation reveals substantial S-Corr even for competitive GenRMs, and higher S-Corr is associated with policy degeneration under optimization. To improve fidelity, we propose Rationale-Centric Alignment (R-Align), which augments training with gold judgments and explicitly supervises rationale alignment. R-Align reduces S-Corr on RM benchmarks and yields consistent gains in actor performance across STEM, coding, instruction-following, and general tasks.
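
The abstract defines S-Corr only in words, so the following minimal sketch illustrates one plausible reading of the metric: among label-correct preference decisions, the fraction whose rationales a meta-judge flags as misaligned with the gold judgment. The field names and the choice of denominator are assumptions for illustration, not the authors' implementation.

```python
# Sketch of Spurious Correctness (S-Corr) under the assumption that the
# denominator is the set of label-correct decisions. Field names are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class JudgedExample:
    label_correct: bool       # GenRM picked the preferred response
    rationale_aligned: bool   # rationale matches the gold judgment (e.g., per a meta-judge)


def spurious_correctness(examples: List[JudgedExample]) -> float:
    """Fraction of label-correct decisions whose rationales are misaligned."""
    correct = [ex for ex in examples if ex.label_correct]
    if not correct:
        return 0.0
    spurious = sum(1 for ex in correct if not ex.rationale_aligned)
    return spurious / len(correct)


if __name__ == "__main__":
    demo = [
        JudgedExample(True, True),
        JudgedExample(True, False),   # correct label, misaligned rationale -> spurious
        JudgedExample(False, False),  # wrong label, excluded from the denominator
    ]
    print(f"S-Corr: {spurious_correctness(demo):.2f}")  # 0.50
```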
