MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs

Main: 13 pages · Appendix: 20 pages · Bibliography: 7 pages · 22 figures · 14 tables
Abstract

A critical component of LLM trustworthiness is reliable uncertainty communication, yet LLMs often use assertive language when conveying false claims, leading to over-reliance and eroded trust. We present the first systematic study of faithful confidence calibration of LLMs, benchmarking models' ability to use linguistic expressions of uncertainty that faithfully reflect their intrinsic uncertainty, across a comprehensive array of models, datasets, and prompting strategies. Our results demonstrate that LLMs largely fail at this task and that existing interventions are insufficient: standard prompting approaches provide only marginal gains, and existing factuality-based calibration techniques can even harm faithful calibration. To address this critical gap, we introduce MetaFaith, a novel prompt-based calibration approach inspired by human metacognition. We show that MetaFaith robustly improves faithful calibration across diverse models and task domains, enabling up to a 61% improvement in faithfulness and achieving an 83% win rate over original generations as judged by humans.
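To make the notion of faithful calibration concrete, here is a minimal, hypothetical Python sketch (not the paper's metric or code): intrinsic confidence is proxied by agreement across sampled answers, verbalized assertiveness is scored from a toy hedge lexicon, and the gap between the two flags unfaithful hedging. The lexicon, scores, and function names are all illustrative assumptions.

from collections import Counter

# Hypothetical hedge lexicon: maps uncertainty cue words to an
# assertiveness score in [0, 1] (1.0 = fully assertive).
HEDGE_SCORES = {
    "definitely": 1.0, "certainly": 1.0,
    "probably": 0.7, "likely": 0.7,
    "possibly": 0.4, "might": 0.4,
    "unsure": 0.2, "guess": 0.2,
}

def intrinsic_confidence(sampled_answers):
    """Proxy for intrinsic confidence: fraction of sampled answers
    that agree with the most common answer."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][1] / len(sampled_answers)

def assertiveness(response_text):
    """Score how assertive the verbalized response sounds,
    averaging the scores of any hedge cues it contains."""
    words = [w.strip(".,!?").lower() for w in response_text.split()]
    cues = [HEDGE_SCORES[w] for w in words if w in HEDGE_SCORES]
    return sum(cues) / len(cues) if cues else 1.0  # no hedges -> fully assertive

def faithfulness_gap(sampled_answers, response_text):
    """Small gap = the verbalized confidence tracks intrinsic confidence."""
    return abs(intrinsic_confidence(sampled_answers) - assertiveness(response_text))

if __name__ == "__main__":
    # The model answers inconsistently across 5 samples (low intrinsic confidence)...
    samples = ["Paris", "Lyon", "Paris", "Marseille", "Nice"]
    # ...yet sounds certain: a large gap flags unfaithful calibration.
    print(faithfulness_gap(samples, "It is definitely Paris."))             # 0.6
    # Hedged wording that matches the low confidence yields a small gap.
    print(faithfulness_gap(samples, "It might be Paris, but I am unsure.")) # 0.1

In this toy setup, a prompt-based intervention like MetaFaith would aim to shrink that gap, steering the model toward hedged phrasing when its sampled answers disagree.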

@article{liu2025_2505.24858,
  title={MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs},
  author={Gabrielle Kaili-May Liu and Gal Yona and Avi Caciularu and Idan Szpektor and Tim G. J. Rudner and Arman Cohan},
  journal={arXiv preprint arXiv:2505.24858},
  year={2025}
}