Process Supervision for Chain-of-Thought Reasoning via Monte Carlo Net Information Gain

Corentin Royer
Debarun Bhattacharjya
Gaetano Rossiello
Andrea Giovannini
Mennatallah El-Assady
Main: 8 pages · Bibliography: 2 pages · Appendix: 8 pages · 9 figures · 11 tables
Abstract

Multi-step reasoning improves the capabilities of large language models (LLMs) but increases the risk of errors propagating through intermediate steps. Process reward models (PRMs) mitigate this by scoring each step individually, enabling fine-grained supervision and improved reliability. Existing methods for training PRMs rely on costly human annotations or computationally intensive automatic labeling. We propose a novel approach to automatically generate step-level labels using information theory. Our method estimates how each reasoning step affects the likelihood of the correct answer, providing a signal of step quality. Importantly, it reduces computational complexity to $\mathcal{O}(N)$, improving over the previous $\mathcal{O}(N \log N)$ methods. We demonstrate that these labels enable effective chain-of-thought selection in best-of-$K$ evaluation settings across diverse reasoning benchmarks, including mathematics, Python programming, SQL, and scientific question answering. This work enables scalable and efficient supervision of LLM reasoning, particularly for tasks where error propagation is critical.
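To make the labeling idea concrete, below is a minimal Python sketch of one plausible reading of the abstract: label each reasoning step by the change it induces in a Monte Carlo estimate of the log-likelihood of the correct final answer. This is not the authors' implementation; the names `sample_completions` (a stand-in for an LLM rollout API), `mc_answer_logprob`, `step_info_gain_labels`, and the parameters `n_samples` and `eps` are all hypothetical.

```python
import math
from typing import Callable, List

def mc_answer_logprob(
    sample_completions: Callable[[str, int], List[str]],  # hypothetical LLM rollout API
    prefix: str,
    correct_answer: str,
    n_samples: int = 16,
    eps: float = 1e-6,
) -> float:
    """Monte Carlo estimate of log P(correct answer | reasoning prefix)."""
    answers = sample_completions(prefix, n_samples)
    hits = sum(a == correct_answer for a in answers)
    # Smooth zero counts so the log is defined.
    return math.log(max(hits / n_samples, eps))

def step_info_gain_labels(
    sample_completions: Callable[[str, int], List[str]],
    question: str,
    steps: List[str],
    correct_answer: str,
    n_samples: int = 16,
) -> List[float]:
    """Label each step by the net change in the estimated log-likelihood
    of the correct final answer after appending that step."""
    labels: List[float] = []
    prefix = question
    prev = mc_answer_logprob(sample_completions, prefix, correct_answer, n_samples)
    for step in steps:
        prefix = prefix + "\n" + step
        cur = mc_answer_logprob(sample_completions, prefix, correct_answer, n_samples)
        # Positive label: the step made the correct answer more likely.
        labels.append(cur - prev)
        prev = cur
    return labels
```

Under this reading, one batch of rollouts per step suffices, so labeling a chain of $N$ steps costs $\mathcal{O}(N)$ rollout batches, in contrast to binary-search-style automatic labelers that locate the first erroneous step in $\mathcal{O}(N \log N)$.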
