Reward models are essential for aligning language model outputs with human preferences, yet existing approaches often lack both controllability and interpretability. These models are typically optimized for narrow objectives, which limits their generalizability to broader downstream tasks. Moreover, their scalar outputs are difficult to interpret without contextual reasoning. To address these limitations, we introduce R3, a novel reward modeling framework that is rubric-agnostic, generalizable across evaluation dimensions, and provides interpretable, reasoned score assignments. R3 enables more transparent and flexible evaluation of language models, supporting robust alignment with diverse human values and use cases. Our models, data, and code are available as open source at this https URL.
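To make the contrast with scalar-only reward heads concrete, the sketch below illustrates one way a rubric-agnostic, reasoned evaluation interface could look. This is an illustrative assumption, not the authors' R3 implementation: it assumes a generative judge is prompted with an arbitrary rubric supplied at inference time and returns free-text reasoning followed by a final score, which is then parsed.

```python
# Illustrative sketch only -- NOT the R3 implementation or its API.
# Assumption: a generative reward model receives (rubric, prompt, response)
# and emits a natural-language rationale ending in "Score: <integer>".

from dataclasses import dataclass
import re


@dataclass
class Judgment:
    reasoning: str  # natural-language rationale (the interpretable part)
    score: int      # score on the scale defined by the supplied rubric


def build_eval_prompt(rubric: str, prompt: str, response: str) -> str:
    """Compose an evaluation prompt; the rubric is provided at inference
    time, so no fixed evaluation dimension is baked into the model."""
    return (
        f"Rubric:\n{rubric}\n\n"
        f"Task prompt:\n{prompt}\n\n"
        f"Candidate response:\n{response}\n\n"
        "Explain your reasoning, then end with 'Score: <integer>'."
    )


def parse_judgment(model_output: str) -> Judgment:
    """Split the judge's output into a rationale and a final score."""
    match = re.search(r"Score:\s*(-?\d+)\s*$", model_output.strip())
    if match is None:
        raise ValueError("No final score found in model output")
    return Judgment(
        reasoning=model_output[: match.start()].strip(),
        score=int(match.group(1)),
    )


if __name__ == "__main__":
    # Stand-in string for a generative judge's completion.
    fake_output = (
        "The response answers the question and cites evidence, "
        "but omits the requested caveat.\nScore: 4"
    )
    print(parse_judgment(fake_output))
```

Under this reading, interpretability comes from the rationale accompanying each score, and rubric-agnosticism from the rubric being an input rather than a trained-in objective.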
@article{anugraha2025_2505.13388,
  title   = {R3: Robust Rubric-Agnostic Reward Models},
  author  = {David Anugraha and Zilu Tang and Lester James V. Miranda and Hanyang Zhao and Mohammad Rifqi Farhansyah and Garry Kuwanto and Derry Wijaya and Genta Indra Winata},
  journal = {arXiv preprint arXiv:2505.13388},
  year    = {2025}
}