Attention Head Embeddings with Trainable Deep Kernels for Hallucination Detection in LLMs

Main: 7 pages
Appendix: 1 page
Bibliography: 2 pages
Figures: 6
Tables: 7
Abstract

We present a novel approach for detecting hallucinations in large language models (LLMs) by analyzing the probabilistic divergence between prompt and response hidden-state distributions. Counterintuitively, we find that hallucinated responses exhibit smaller deviations from their prompts than grounded responses do, suggesting that hallucinations often arise from superficial rephrasing rather than substantive reasoning. Leveraging this insight, we propose a model-intrinsic detection method that uses distributional distances as principled hallucination scores, eliminating the need for external knowledge or auxiliary models. To enhance sensitivity, we employ deep learnable kernels that automatically adapt to capture nuanced geometric differences between distributions. Our approach outperforms existing baselines, achieving state-of-the-art performance on several benchmarks. The method remains competitive even without kernel training, offering a robust, scalable solution for hallucination detection.
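The abstract does not spell out the exact distance or kernel architecture, so the following is a minimal sketch of the general idea, assuming an MMD-style distance between prompt-token and response-token hidden states with a small learnable feature map acting as the "deep kernel". The module names, dimensions, and the biased MMD estimator are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class DeepKernel(nn.Module):
    """RBF kernel evaluated on top of a learnable feature map (illustrative)."""

    def __init__(self, hidden_dim: int, feat_dim: int = 64):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim)
        )
        self.log_bandwidth = nn.Parameter(torch.zeros(()))  # learnable RBF width

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Pairwise squared distances in the learned feature space.
        d2 = torch.cdist(self.phi(x), self.phi(y)).pow(2)
        return torch.exp(-d2 / (2 * self.log_bandwidth.exp() ** 2))


def mmd_score(kernel: DeepKernel, prompt_h: torch.Tensor, resp_h: torch.Tensor) -> torch.Tensor:
    """Biased MMD^2 estimate between prompt and response hidden-state samples.

    Under the paper's finding, a *low* score (the response stays
    distributionally close to its prompt) would flag a likely hallucination.
    """
    k_pp = kernel(prompt_h, prompt_h).mean()
    k_rr = kernel(resp_h, resp_h).mean()
    k_pr = kernel(prompt_h, resp_h).mean()
    return k_pp + k_rr - 2 * k_pr


if __name__ == "__main__":
    torch.manual_seed(0)
    kernel = DeepKernel(hidden_dim=4096)   # hypothetical LLM hidden size
    prompt_h = torch.randn(32, 4096)       # stand-in for prompt-token hidden states
    resp_h = torch.randn(48, 4096)         # stand-in for response-token hidden states
    print(f"MMD^2 hallucination score: {mmd_score(kernel, prompt_h, resp_h).item():.4f}")

In a trainable variant, the feature map and bandwidth could be fit on labeled grounded/hallucinated examples; the training-free variant mentioned in the abstract would correspond to fixing the kernel (e.g., an RBF on the raw hidden states) and using the distance directly as the score.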

@article{oblovatny2025_2506.09886,
  title={Attention Head Embeddings with Trainable Deep Kernels for Hallucination Detection in LLMs},
  author={Rodion Oblovatny and Alexandra Bazarova and Alexey Zaytsev},
  journal={arXiv preprint arXiv:2506.09886},
  year={2025}
}