Hallucination Detection in LLMs via Topological Divergence on Attention Graphs

14 April 2025
Alexandra Bazarova
Aleksandr Yugay
Andrey Shulga
Alina Ermilova
Andrei Volodichev
Konstantin Polev
Julia Belikova
Rauf Parchiev
Dmitry Simakov
Maxim Savchenko
Andrey Savchenko
Serguei Barannikov
Alexey Zaytsev
Abstract

Hallucination, i.e., generating factually incorrect content, remains a critical challenge for large language models (LLMs). We introduce TOHA, a TOpology-based HAllucination detector in the RAG setting, which leverages a topological divergence metric to quantify the structural properties of graphs induced by attention matrices. Examining the topological divergence between prompt and response subgraphs reveals consistent patterns: higher divergence values in specific attention heads correlate with hallucinated outputs, independent of the dataset. Extensive experiments, including evaluation on question answering and data-to-text tasks, show that our approach achieves state-of-the-art or competitive results on several benchmarks, two of which were annotated by us and are being publicly released to facilitate further research. Beyond its strong in-domain performance, TOHA maintains remarkable domain transferability across multiple open-source LLMs. Our findings suggest that analyzing the topological structure of attention matrices can serve as an efficient and robust indicator of factual reliability in LLMs.
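The abstract describes scoring hallucinations by comparing the topology of attention-induced graphs over the prompt and the response. As a rough illustration of that idea only (not the paper's actual TOHA metric), the sketch below builds a toy distance graph from a single attention head, summarizes its 0-dimensional topology via minimum-spanning-tree edge weights, and compares the prompt-only subgraph with the full prompt-plus-response graph; every function name and the specific divergence formula here are illustrative assumptions.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def attention_to_distance(attn: np.ndarray) -> np.ndarray:
    # Illustrative assumption: symmetrize the attention matrix and turn
    # strong attention into short edges (distance = 1 - attention weight).
    sym = 0.5 * (attn + attn.T)
    return 1.0 - np.clip(sym, 0.0, 1.0)


def zeroth_barcode(dist: np.ndarray) -> np.ndarray:
    # A simple 0-dimensional topological summary of the weighted graph:
    # the sorted edge weights of its minimum spanning tree (the scales at
    # which connected components merge).
    mst = minimum_spanning_tree(dist).toarray()
    return np.sort(mst[mst > 0])


def topological_divergence(attn: np.ndarray, prompt_len: int) -> float:
    # Toy divergence (not the paper's metric): how much adding the response
    # tokens shifts the average 0-dim persistence relative to the prompt-only
    # subgraph.
    dist_full = attention_to_distance(attn)
    dist_prompt = dist_full[:prompt_len, :prompt_len]
    full_bars = zeroth_barcode(dist_full)
    prompt_bars = zeroth_barcode(dist_prompt)
    return abs(full_bars.mean() - prompt_bars.mean())


if __name__ == "__main__":
    # Fake attention head for demonstration: each row is a probability
    # distribution over the sequence positions.
    rng = np.random.default_rng(0)
    seq_len, prompt_len = 32, 20
    attn = rng.dirichlet(np.ones(seq_len), size=seq_len)
    print(f"divergence score for this head: {topological_divergence(attn, prompt_len):.4f}")

In the paper's setting such a per-head score would be compared against hallucinated versus grounded responses; the specific aggregation across heads and the exact topological divergence are described in the full text.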

@article{bazarova2025_2504.10063,
  title={Hallucination Detection in LLMs via Topological Divergence on Attention Graphs},
  author={Alexandra Bazarova and Aleksandr Yugay and Andrey Shulga and Alina Ermilova and Andrei Volodichev and Konstantin Polev and Julia Belikova and Rauf Parchiev and Dmitry Simakov and Maxim Savchenko and Andrey Savchenko and Serguei Barannikov and Alexey Zaytsev},
  journal={arXiv preprint arXiv:2504.10063},
  year={2025}
}