Hallucination Detection in LLMs Using Spectral Features of Attention Maps

24 February 2025
Jakub Binkowski
Denis Janiak
Albert Sawczyn
Bogdan Gabrys
Tomasz Kajdanowicz
Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across various tasks but remain prone to hallucinations. Detecting hallucinations is essential for safety-critical applications, and recent methods leverage attention map properties to this end, though their effectiveness remains limited. In this work, we investigate the spectral features of attention maps by interpreting them as adjacency matrices of graph structures. We propose the LapEigvals method, which utilises the top-$k$ eigenvalues of the Laplacian matrix derived from the attention maps as an input to hallucination detection probes. Empirical evaluations demonstrate that our approach achieves state-of-the-art hallucination detection performance among attention-based methods. Extensive ablation studies further highlight the robustness and generalisation of LapEigvals, paving the way for future advancements in the hallucination detection domain.
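The core pipeline described in the abstract — treat an attention map as a weighted adjacency matrix, form its graph Laplacian, and keep the top-$k$ eigenvalues as probe features — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `lap_eigvals_features`, the symmetrisation step, and the choice of the combinatorial Laplacian are assumptions made for the sketch.

```python
import numpy as np

def lap_eigvals_features(attn: np.ndarray, k: int = 8) -> np.ndarray:
    """Sketch of the LapEigvals idea: interpret an attention map as a
    weighted adjacency matrix, build the graph Laplacian, and return its
    top-k eigenvalues as features for a hallucination-detection probe."""
    # Symmetrise the row-stochastic attention map so it is a valid
    # adjacency matrix of an undirected weighted graph (assumption).
    A = 0.5 * (attn + attn.T)
    D = np.diag(A.sum(axis=1))       # degree matrix
    L = D - A                        # combinatorial graph Laplacian
    # L is real symmetric, so eigvalsh applies; it returns eigenvalues
    # sorted in ascending order.
    eigvals = np.linalg.eigvalsh(L)
    return eigvals[-k:][::-1]        # top-k largest, descending

# Usage: a random row-normalised "attention map" over 16 tokens.
rng = np.random.default_rng(0)
attn = rng.random((16, 16))
attn = attn / attn.sum(axis=1, keepdims=True)  # softmax-like rows
feats = lap_eigvals_features(attn, k=5)
```

In practice such features would be extracted per attention head and layer and concatenated before being fed to a trained probe; the Laplacian is positive semi-definite, so all returned features are non-negative.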

@article{binkowski2025_2502.17598,
  title={Hallucination Detection in LLMs Using Spectral Features of Attention Maps},
  author={Jakub Binkowski and Denis Janiak and Albert Sawczyn and Bogdan Gabrys and Tomasz Kajdanowicz},
  journal={arXiv preprint arXiv:2502.17598},
  year={2025}
}