ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.


Span-Level Hallucination Detection for LLM-Generated Answers

25 April 2025
Passant Elchafei
Mervet Abu-Elkheir
    HILM
    LRM
Abstract

Detecting spans of hallucination in LLM-generated answers is crucial for improving factual consistency. This paper presents a span-level hallucination detection framework for the SemEval-2025 Shared Task, focusing on English and Arabic texts. Our approach integrates Semantic Role Labeling (SRL) to decompose the answer into atomic roles, which are then compared with a reference context retrieved via question-based LLM prompting. Using a DeBERTa-based textual entailment model, we evaluate each role's semantic alignment with the retrieved context. The entailment scores are further refined with token-level confidence measures derived from output logits, and the combined scores are used to detect hallucinated spans. Experiments on the Mu-SHROOM dataset demonstrate competitive performance. Additionally, hallucinated spans are verified through fact-checking by prompting GPT-4 and LLaMA. Our findings contribute to improving hallucination detection in LLM-generated responses.
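The scoring step the abstract describes — fusing an entailment probability for each SRL role with a token-level confidence derived from the generator's logits, then thresholding the combined score — can be sketched as follows. This is a minimal illustration, not the authors' code: the fusion weight `alpha`, the threshold, and all class and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RoleSpan:
    text: str                # atomic role extracted via SRL
    start: int               # character offset of the span in the answer
    end: int
    entailment_prob: float   # P(entailed | role, retrieved context) from the NLI model
    token_confidence: float  # mean token probability from the generator's output logits

def hallucination_score(span: RoleSpan, alpha: float = 0.6) -> float:
    """Combine entailment and confidence into one hallucination score.

    Higher means more likely hallucinated. `alpha` weights the entailment
    signal against the token-level confidence (the weight is an assumption).
    """
    return alpha * (1.0 - span.entailment_prob) + (1.0 - alpha) * (1.0 - span.token_confidence)

def detect_hallucinated_spans(spans, threshold: float = 0.5):
    """Return (start, end) offsets of spans whose combined score exceeds the threshold."""
    return [(s.start, s.end) for s in spans if hallucination_score(s) > threshold]
```

For example, a role with low entailment probability (0.1) and low generator confidence (0.4) scores 0.78 and is flagged, while a well-supported role (entailment 0.95, confidence 0.9) scores 0.07 and is kept.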

@article{elchafei2025_2504.18639,
  title={Span-Level Hallucination Detection for LLM-Generated Answers},
  author={Passant Elchafei and Mervet Abu-Elkheir},
  journal={arXiv preprint arXiv:2504.18639},
  year={2025}
}