Know What You Don't Know: Uncertainty Calibration of Process Reward Models

11 June 2025
Young-Jin Park, Kristjan Greenewald, Kaveh Alim, Hao Wang, Navid Azizan
Topics: LRM
16 figures, 7 tables, 33-page appendix
Abstract

Process reward models (PRMs) play a central role in guiding inference-time scaling algorithms for large language models (LLMs). However, we observe that even state-of-the-art PRMs can be poorly calibrated and often overestimate success probabilities. To address this, we present a calibration approach, performed via quantile regression, that adjusts PRM outputs to better align with true success probabilities. Leveraging these calibrated success estimates and their associated confidence bounds, we introduce an instance-adaptive scaling (IAS) framework that dynamically adjusts the inference budget based on the estimated likelihood that a partial reasoning trajectory will yield a correct final answer. Unlike conventional methods that allocate a fixed number of reasoning trajectories per query, this approach successfully adapts to each instance and reasoning step when using our calibrated PRMs. Experiments on mathematical reasoning benchmarks show that (i) our PRM calibration method successfully achieves small calibration error, outperforming the baseline methods, (ii) calibration is crucial for enabling effective adaptive scaling, and (iii) the proposed IAS strategy reduces inference costs while maintaining final answer accuracy, utilizing less compute on more confident problems as desired.
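
The abstract describes two coupled ideas: calibrating raw PRM scores against true success probabilities via quantile regression, and using a calibrated lower confidence bound to set a per-instance sampling budget. The Python sketch below illustrates both under stated assumptions; it is not the authors' implementation. In particular, using empirical rollout success rates as regression targets, the gradient-boosted quantile models, the 10% quantile as the lower bound, and the (1-p)^k stopping rule are all illustrative choices.

```python
# Hypothetical sketch of (1) quantile-regression calibration of raw PRM
# scores and (2) an instance-adaptive sampling budget. Every modeling and
# budgeting choice here is an assumption made for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# --- toy calibration data: raw PRM scores vs. observed success rates ------
# Assume each partial trajectory was rolled out 8 times, yielding an
# empirical probability of reaching a correct final answer.
raw_prm_scores = rng.uniform(0.0, 1.0, size=(2000, 1))
# Simulate an overconfident PRM: the true success probability is lower
# than the raw score suggests.
true_success_prob = raw_prm_scores[:, 0] ** 2
empirical_success = rng.binomial(n=8, p=true_success_prob) / 8.0

# --- quantile regression: median estimate and a lower confidence bound ----
median_model = GradientBoostingRegressor(loss="quantile", alpha=0.5)
lower_model = GradientBoostingRegressor(loss="quantile", alpha=0.1)
median_model.fit(raw_prm_scores, empirical_success)
lower_model.fit(raw_prm_scores, empirical_success)

def calibrated_success(score: float) -> tuple[float, float]:
    """Return (median, 10%-quantile) calibrated success estimates."""
    x = np.array([[score]])
    return (float(median_model.predict(x)[0]),
            float(lower_model.predict(x)[0]))

# --- instance-adaptive scaling: spend less on confident prefixes ----------
def adaptive_budget(score: float, max_samples: int = 16) -> int:
    """Illustrative rule: if each of k sampled continuations succeeds
    independently with probability p, all k fail with probability
    (1 - p)**k; pick the smallest k driving that below 5%, capped."""
    _, p_lower = calibrated_success(score)
    p_lower = min(max(p_lower, 1e-3), 1.0 - 1e-3)
    k = int(np.ceil(np.log(0.05) / np.log(1.0 - p_lower)))
    return max(1, min(k, max_samples))

for s in (0.3, 0.6, 0.9):
    print(f"raw PRM score {s:.1f} -> budget {adaptive_budget(s)} samples")
```

The budget rule captures the behavior the abstract claims for IAS: prefixes with a high calibrated lower bound on success receive only a few samples, while uncertain prefixes receive more, up to the cap.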

@article{park2025_2506.09338,
  title={Know What You Don't Know: Uncertainty Calibration of Process Reward Models},
  author={Young-Jin Park and Kristjan Greenewald and Kaveh Alim and Hao Wang and Navid Azizan},
  journal={arXiv preprint arXiv:2506.09338},
  year={2025}
}