Calibrating LLM Judges: Linear Probes for Fast and Reliable Uncertainty Estimation

Bhaktipriya Radharapu
Eshika Saxena
Kenneth Li
Chenxi Whitehouse
Adina Williams
Nicola Cancedda
Main: 6 pages · 9 figures · Bibliography: 4 pages · 10 tables · Appendix: 13 pages
Abstract

As LLM-based judges become integral to industry applications, obtaining well-calibrated uncertainty estimates efficiently has become critical for production deployment. However, existing techniques, such as verbalized confidence and multi-generation methods, are often either poorly calibrated or computationally expensive. We introduce linear probes trained with a Brier score-based loss to provide calibrated uncertainty estimates from reasoning judges' hidden states, requiring no additional model training. We evaluate our approach on both objective tasks (reasoning, mathematics, factuality, coding) and subjective human preference judgments. Our results demonstrate that probes achieve superior calibration compared to existing methods with ≈10× computational savings, generalize robustly to unseen evaluation domains, and deliver higher accuracy on high-confidence predictions. However, probes produce conservative estimates that underperform on easier datasets but may benefit safety-critical deployments prioritizing low false-positive rates. Overall, our work demonstrates that interpretability-based uncertainty estimation provides a practical and scalable plug-and-play solution for LLM judges in production.
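The core idea can be illustrated with a minimal sketch: a linear probe maps a judge's hidden state to a confidence score via a sigmoid, trained to minimize the Brier score (mean squared error between predicted probability and the binary correctness label). The data below is a synthetic stand-in — in the paper's setting the features would be hidden states extracted from the judge model's forward pass, and the specific loss, optimizer, and layer choice here are illustrative assumptions, not the authors' exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for judge hidden states: random features with a
# planted linear signal governing whether the judge's verdict is correct.
d, n = 64, 2000
w_true = rng.normal(size=d)
H = rng.normal(size=(n, d))                      # "hidden states"
p_correct = 1 / (1 + np.exp(-H @ w_true))        # latent correctness prob.
y = (rng.random(n) < p_correct).astype(float)    # 1 = verdict was correct

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Linear probe: p = sigmoid(H w + b), fit by full-batch gradient descent
# on the Brier score, mean((p - y)^2).
w, b, lr = np.zeros(d), 0.0, 0.5
for _ in range(500):
    p = sigmoid(H @ w + b)
    g = 2 * (p - y) * p * (1 - p)    # chain rule through the sigmoid
    w -= lr * (H.T @ g) / n
    b -= lr * g.mean()

brier = np.mean((sigmoid(H @ w + b) - y) ** 2)
```

After training, `sigmoid(H @ w + b)` serves as a per-judgment confidence estimate; a lower Brier score than the constant-baseline predictor (always outputting the base rate `y.mean()`) indicates the probe has extracted calibration-relevant signal from the hidden states.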
