Reward Inside the Model: A Lightweight Hidden-State Reward Model for LLM's Best-of-N sampling

High-quality reward models are crucial for unlocking the reasoning potential of large language models (LLMs), with best-of-N sampling demonstrating significant performance gains. However, current reward models, which typically operate on the textual output of LLMs, are computationally expensive and parameter-heavy, limiting their real-world applications. We introduce the Efficient Linear Hidden State Reward (ELHSR) model, a novel, highly parameter-efficient approach that leverages the rich information embedded in LLM hidden states to address these issues. ELHSR systematically outperforms baseline reward models while using less than 0.005% of their parameters and requiring only a few samples for training. ELHSR also achieves an orders-of-magnitude efficiency improvement, requiring significantly less time and fewer FLOPs per sample than baseline reward models. Moreover, ELHSR exhibits robust performance even when trained only on logits, extending its applicability to some closed-source LLMs. In addition, ELHSR can be combined with traditional reward models to achieve further performance gains.
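Since ELHSR scores candidate responses with a lightweight linear head over the LLM's hidden states and then selects the highest-scoring candidate, a minimal sketch of that idea might look like the following. The class name, mean pooling choice, and `best_of_n` helper are illustrative assumptions, not the paper's exact implementation.

```python
import torch

class LinearHiddenStateReward(torch.nn.Module):
    """Hedged sketch: a single linear projection from hidden states to a scalar reward."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_tokens, hidden_size) for one candidate response.
        # Score each token, then pool to a single scalar reward per response
        # (mean pooling here is an assumption for illustration).
        token_scores = self.proj(hidden_states).squeeze(-1)  # (num_tokens,)
        return token_scores.mean()


def best_of_n(candidates_hidden_states, reward_model):
    """Return the index of the candidate whose pooled hidden-state reward is highest."""
    rewards = [reward_model(h) for h in candidates_hidden_states]
    return max(range(len(rewards)), key=lambda i: rewards[i].item())
```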
@article{guo2025_2505.12225,
  title={Reward Inside the Model: A Lightweight Hidden-State Reward Model for LLM's Best-of-N sampling},
  author={Jizhou Guo and Zhaomin Wu and Philip S. Yu},
  journal={arXiv preprint arXiv:2505.12225},
  year={2025}
}