Towards Harmonized Uncertainty Estimation for Large Language Models

25 May 2025
Rui Li, Jing Long, Muge Qi, Heming Xia, Lei Sha, Peiyi Wang, Zhifang Sui
Main: 8 pages · 7 figures · 7 tables · Bibliography: 5 pages · Appendix: 3 pages
Abstract

To facilitate robust and trustworthy deployment of large language models (LLMs), it is essential to quantify the reliability of their generations through uncertainty estimation. While recent efforts have made significant advances by leveraging the internal logic and linguistic features of LLMs to estimate uncertainty scores, our empirical analysis shows that these methods fail to strike a harmonized balance among indication, balance, and calibration, which limits their broader applicability for accurate uncertainty estimation. To address this challenge, we propose CUE (Corrector for Uncertainty Estimation): a straightforward yet effective method that employs a lightweight model, trained on data aligned with the target LLM's performance, to adjust uncertainty scores. Comprehensive experiments across diverse models and tasks demonstrate its effectiveness, achieving consistent improvements of up to 60% over existing methods.
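The abstract only sketches the idea of a corrector; the paper's actual architecture and training data are not given here. As a minimal illustrative sketch, assuming the corrector is a small model trained on examples labeled by whether the target LLM answered correctly, one could recalibrate raw uncertainty scores with a simple logistic regression (all names and the toy data below are hypothetical, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_corrector(raw_scores, correct, lr=0.5, epochs=500):
    """Fit a 1-D logistic regression mapping a raw uncertainty score
    to the probability that the target LLM's answer is wrong.
    (Illustrative stand-in for a lightweight corrector model.)"""
    w, b = 0.0, 0.0
    y = 1.0 - correct  # target: 1 = incorrect, so the output reads as uncertainty
    for _ in range(epochs):
        p = sigmoid(w * raw_scores + b)
        grad = p - y  # gradient of the logistic loss w.r.t. the logit
        w -= lr * np.mean(grad * raw_scores)
        b -= lr * np.mean(grad)
    return w, b

def corrected_score(raw_score, w, b):
    """Adjusted uncertainty score in [0, 1]."""
    return sigmoid(w * raw_score + b)

# Toy data: raw scores only loosely track correctness; the corrector
# recalibrates them against the target LLM's observed performance.
rng = np.random.default_rng(0)
correct = rng.integers(0, 2, size=200).astype(float)
raw = 0.3 * (1.0 - correct) + 0.2 + 0.3 * rng.random(200)

w, b = train_corrector(raw, correct)
```

After training, `corrected_score` maps the estimator's raw output to a score aligned with how often the LLM is actually wrong at that score level, which is the calibration role the abstract attributes to CUE.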

@article{li2025_2505.19073,
  title={Towards Harmonized Uncertainty Estimation for Large Language Models},
  author={Rui Li and Jing Long and Muge Qi and Heming Xia and Lei Sha and Peiyi Wang and Zhifang Sui},
  journal={arXiv preprint arXiv:2505.19073},
  year={2025}
}