Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs

Main: 8 pages · Appendix: 10 pages · Bibliography: 5 pages · 9 figures · 16 tables
Abstract

Large language models (LLMs) exhibit impressive fluency, but often produce critical errors known as "hallucinations". Uncertainty quantification (UQ) methods are a promising tool for coping with this fundamental shortcoming. Yet, existing UQ methods face challenges such as high computational overhead or reliance on supervised learning. Here, we aim to bridge this gap. In particular, we propose RAUQ (Recurrent Attention-based Uncertainty Quantification), an unsupervised approach that leverages intrinsic attention patterns in transformers to detect hallucinations efficiently. By analyzing attention weights, we identify a peculiar pattern: for certain "uncertainty-aware" heads, attention to the preceding token drops systematically during incorrect generations. RAUQ automatically selects such heads, recurrently aggregates their attention weights and token-level confidences, and computes sequence-level uncertainty scores in a single forward pass. Experiments across 4 LLMs and 12 question answering, summarization, and translation tasks demonstrate that RAUQ yields excellent results, outperforming state-of-the-art UQ methods while adding minimal computational overhead (<1% latency). Moreover, it requires no task-specific labels and no careful hyperparameter tuning, offering plug-and-play real-time hallucination detection in white-box LLMs.
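
To make the recipe in the abstract concrete, below is a minimal, illustrative sketch (not the authors' reference implementation) of an attention-based uncertainty score computed in a single forward pass with Hugging Face Transformers: extract per-head attention to the immediately preceding token, select one head per layer by its average attention to that token (a heuristic stand-in for RAUQ's head-selection criterion), and recurrently blend that attention signal with token probabilities into a sequence-level score. The mixing weight alpha, the EMA-style recurrence with beta, and the final averaging across layers are illustrative assumptions, not the paper's exact formulas.

# Illustrative sketch only: approximates the RAUQ-style recipe described in the
# abstract (single forward pass, one "uncertainty-aware" head per layer, recurrent
# aggregation of attention and token confidence). The head-selection heuristic,
# alpha, beta, and the final layer averaging are assumptions, not the paper's method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any white-box causal LM with accessible attention weights
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, output_attentions=True, attn_implementation="eager"
)
model.eval()

text = "The capital of Australia is Sydney."
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs)  # one forward pass yields both logits and attentions

# out.attentions: tuple of (batch, heads, seq, seq), one entry per layer
attn = torch.stack(out.attentions).squeeze(1)  # (layers, heads, seq, seq)
n_layers, n_heads, seq_len, _ = attn.shape

# Attention each head puts on the immediately preceding token, for positions 1..T-1
rows = torch.arange(1, seq_len)
cols = torch.arange(0, seq_len - 1)
prev_attn = attn[:, :, rows, cols]  # (layers, heads, seq-1)

# Heuristic head selection (assumption): per layer, the head with the largest
# average attention to the previous token.
head_idx = prev_attn.mean(dim=-1).argmax(dim=-1)        # (layers,)
selected = prev_attn[torch.arange(n_layers), head_idx]  # (layers, seq-1)

# Probability the model assigned to each observed next token (teacher forcing)
probs = out.logits[0, :-1].softmax(-1)                  # (seq-1, vocab)
token_probs = probs[torch.arange(seq_len - 1), inputs["input_ids"][0, 1:]]

# Recurrent aggregation (illustrative): blend attention-to-previous-token with
# token probability, then smooth over time with an EMA-style recurrence.
alpha, beta = 0.5, 0.9
conf = torch.ones(n_layers)
for t in range(seq_len - 1):
    g_t = alpha * token_probs[t] + (1 - alpha) * selected[:, t]
    conf = beta * conf + (1 - beta) * g_t

# Sequence-level uncertainty: higher means less confident (assumed aggregation)
uncertainty = 1.0 - conf.mean().item()
print(f"sequence-level uncertainty: {uncertainty:.4f}")

Because everything is read off the attentions and logits of the same forward pass used for generation, a score of this kind adds essentially no extra latency, which is the efficiency argument the abstract makes.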

@article{vazhentsev2025_2505.20045,
  title={Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs},
  author={Artem Vazhentsev and Lyudmila Rvanova and Gleb Kuzmin and Ekaterina Fadeeva and Ivan Lazichny and Alexander Panchenko and Maxim Panov and Timothy Baldwin and Mrinmaya Sachan and Preslav Nakov and Artem Shelmanov},
  journal={arXiv preprint arXiv:2505.20045},
  year={2025}
}