
Inv-Entropy: A Fully Probabilistic Framework for Uncertainty Quantification in Language Models

Comments: 9 pages (main), 4 figures, 13 tables, 5-page bibliography, 11-page appendix
Abstract

Large language models (LLMs) have transformed natural language processing, but their reliable deployment requires effective uncertainty quantification (UQ). Existing UQ methods are often heuristic and lack a probabilistic foundation. This paper begins by providing a theoretical justification for the role of perturbations in UQ for LLMs. We then introduce a dual random walk perspective, modeling input-output pairs as two Markov chains with transition probabilities defined by semantic similarity. Building on this, we propose a fully probabilistic framework based on an inverse model, which quantifies uncertainty by evaluating the diversity of the input space conditioned on a given output through systematic perturbations. Within this framework, we define a new uncertainty measure, Inv-Entropy. A key strength of our framework is its flexibility: it supports various definitions of uncertainty measures, embeddings, perturbation strategies, and similarity metrics. We also propose GAAP, a perturbation algorithm based on genetic algorithms, which enhances the diversity of sampled inputs. In addition, we introduce a new evaluation metric, Temperature Sensitivity of Uncertainty (TSU), which directly assesses uncertainty without relying on correctness as a proxy. Extensive experiments demonstrate that Inv-Entropy outperforms existing semantic UQ methods. The code to reproduce the results can be found at this https URL.
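The abstract describes the dual random walk and inverse-model ideas only at a high level. The sketch below is a minimal, hypothetical illustration, not the authors' implementation: it uses placeholder bag-of-words embeddings and cosine similarity (the framework explicitly allows any embedding, perturbation strategy, and similarity metric) to show how pairwise semantic similarities among perturbed inputs can be row-normalized into Markov transition probabilities, and how the entropy of an input-given-output distribution could serve as a simple uncertainty score.

```python
# Illustrative sketch only (assumed names and choices, not the paper's code):
# build a row-stochastic "semantic" transition matrix over perturbed inputs,
# then score uncertainty as the entropy of a toy p(input | observed output).
import numpy as np
from collections import Counter


def embed(text: str) -> Counter:
    # Placeholder embedding: bag-of-words counts. In practice one would use a
    # sentence-embedding model (an assumption; the paper leaves this flexible).
    return Counter(text.lower().split())


def cosine_sim(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = np.sqrt(sum(v * v for v in a.values()))
    nb = np.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def transition_matrix(texts: list[str]) -> np.ndarray:
    # Row-normalize pairwise similarities so each row is a probability
    # distribution: the Markov-chain view over semantically similar states.
    embs = [embed(t) for t in texts]
    n = len(texts)
    S = np.array([[cosine_sim(embs[i], embs[j]) for j in range(n)] for i in range(n)])
    return S / S.sum(axis=1, keepdims=True)


def input_given_output(outputs: list[str], observed_output: str) -> np.ndarray:
    # Toy stand-in for the inverse model: weight each perturbed input by how
    # similar its answer is to the observed output, then normalize to p(x | y).
    obs = embed(observed_output)
    w = np.array([cosine_sim(embed(o), obs) for o in outputs]) + 1e-12
    return w / w.sum()


def entropy(p: np.ndarray) -> float:
    # Shannon entropy of a discrete distribution: higher values indicate a more
    # diverse set of inputs consistent with the same output.
    return float(-(p * np.log(p)).sum())


if __name__ == "__main__":
    perturbed_inputs = [
        "What is the capital of France?",
        "Which city is France's capital?",
        "Name the capital city of France.",
    ]
    model_outputs = ["Paris", "Paris", "Lyon"]  # stand-ins for sampled LLM answers
    print(transition_matrix(perturbed_inputs).round(2))
    print(entropy(input_given_output(model_outputs, "Paris")))
```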

@article{song2025_2506.09684,
  title={Inv-Entropy: A Fully Probabilistic Framework for Uncertainty Quantification in Language Models},
  author={Haoyi Song and Ruihan Ji and Naichen Shi and Fan Lai and Raed Al Kontar},
  journal={arXiv preprint arXiv:2506.09684},
  year={2025}
}