Token-Level Uncertainty Estimation for Large Language Model Reasoning
While Large Language Models (LLMs) have demonstrated impressive capabilities, their output quality remains inconsistent across application scenarios, making it difficult to identify trustworthy responses, especially in complex tasks that require multi-step reasoning. In this paper, we propose a token-level uncertainty estimation framework that enables LLMs to self-assess and self-improve their generation quality in mathematical reasoning. Specifically, we introduce low-rank random weight perturbation into LLM decoding, generating predictive distributions from which we estimate token-level uncertainties. We then aggregate these uncertainties to reflect the semantic uncertainty of the generated sequences. Experiments on mathematical reasoning datasets of varying difficulty demonstrate that our token-level uncertainty metrics strongly correlate with answer correctness and model robustness. Additionally, we explore using uncertainty to directly enhance the model's reasoning performance through multiple generations combined with particle filtering. Our approach consistently outperforms existing uncertainty estimation methods, establishing effective uncertainty estimation as a valuable tool for both evaluating and improving reasoning generation in LLMs.
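As a rough illustration of the pipeline the abstract describes, the sketch below perturbs an LLM's weight matrices with random low-rank noise, scores a candidate reasoning sequence under several perturbed copies of the model, and treats per-token disagreement as token-level uncertainty. This is a minimal sketch, not the authors' implementation: GPT-2 stands in for the actual model, and the rank, noise scale, choice of perturbed layers, standard-deviation disagreement metric, and mean aggregation are all illustrative assumptions.

```python
# Minimal sketch: token-level uncertainty from low-rank random weight
# perturbations, using GPT-2 via Hugging Face transformers as a stand-in model.
# Hyperparameters and the uncertainty/aggregation choices are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def low_rank_noise(weight: torch.Tensor, rank: int = 4, scale: float = 1e-3) -> torch.Tensor:
    """Random low-rank perturbation scale * (U @ V) with the same shape as `weight`."""
    rows, cols = weight.shape
    u = torch.randn(rows, rank, device=weight.device, dtype=weight.dtype)
    v = torch.randn(rank, cols, device=weight.device, dtype=weight.dtype)
    return scale * (u @ v)

@torch.no_grad()
def token_logprobs_under_perturbation(model, input_ids, rank=4, scale=1e-3):
    """Perturb 2-D weight matrices, score the given sequence, then restore the weights."""
    deltas = {}
    for name, p in model.named_parameters():
        # Perturb attention/MLP matrices; skip embeddings (illustrative assumption).
        if p.ndim == 2 and "wte" not in name and "wpe" not in name:
            deltas[name] = low_rank_noise(p, rank, scale)
            p.add_(deltas[name])
    logits = model(input_ids).logits                       # (1, T, vocab)
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)   # predict token t+1 from prefix
    token_lp = logprobs.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)  # (1, T-1)
    for name, p in model.named_parameters():               # undo the perturbation
        if name in deltas:
            p.sub_(deltas[name])
    return token_lp.squeeze(0)

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
text = "2 + 2 = 4. Therefore the answer is 4."
ids = tok(text, return_tensors="pt").input_ids

# K perturbed forward passes; per-token disagreement across them as uncertainty.
samples = torch.stack([token_logprobs_under_perturbation(model, ids) for _ in range(8)])
token_uncertainty = samples.std(dim=0)            # per-token spread across perturbed models
sequence_uncertainty = token_uncertainty.mean()   # simple mean aggregation (one possible choice)
print(token_uncertainty, sequence_uncertainty)
```

In the same spirit, the aggregated sequence-level uncertainty could act as a weight for resampling among multiple sampled generations, which is one way to read the particle-filtering use mentioned in the abstract, though the exact resampling scheme is not specified here.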
@article{zhang2025_2505.11737,
  title   = {Token-Level Uncertainty Estimation for Large Language Model Reasoning},
  author  = {Tunyu Zhang and Haizhou Shi and Yibin Wang and Hengyi Wang and Xiaoxiao He and Zhuowei Li and Haoxian Chen and Ligong Han and Kai Xu and Huan Zhang and Dimitris Metaxas and Hao Wang},
  journal = {arXiv preprint arXiv:2505.11737},
  year    = {2025}
}