Towards Long Context Hallucination Detection

Large Language Models (LLMs) have demonstrated remarkable performance across various tasks. However, they are prone to contextual hallucination, generating information that is either unsubstantiated by or contradictory to the given context. Although many studies have investigated contextual hallucinations in LLMs, addressing them in long-context inputs remains an open problem. In this work, we take an initial step toward solving this problem by constructing a dataset specifically designed for long-context hallucination detection. Furthermore, we propose a novel architecture that enables pre-trained encoder models, such as BERT, to process long contexts and effectively detect contextual hallucinations through a decomposition and aggregation mechanism. Our experimental results show that the proposed architecture significantly outperforms previous models of similar size as well as LLM-based models across various metrics, while providing substantially faster inference.
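To make the decomposition-and-aggregation idea concrete, below is a minimal sketch of how a BERT-sized encoder could be wrapped to handle long contexts: the context is split into encoder-sized chunks, each chunk is encoded jointly with the response, and the per-chunk representations are fused before classification. This is an illustrative assumption of such an architecture, not the authors' exact implementation; the chunk length, the Transformer-layer aggregator, and the two-way classifier head are placeholder design choices.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class DecomposeAggregateDetector(nn.Module):
    """Hypothetical decompose-and-aggregate hallucination detector (sketch)."""

    def __init__(self, encoder_name="bert-base-uncased", chunk_len=512, hidden=768):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.chunk_len = chunk_len
        # Aggregation over chunk embeddings via a small Transformer layer (assumed choice).
        agg_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.aggregator = nn.TransformerEncoder(agg_layer, num_layers=1)
        self.classifier = nn.Linear(hidden, 2)  # supported vs. hallucinated

    def forward(self, context: str, response: str) -> torch.Tensor:
        # Decompose: split the long context into chunks the encoder can handle,
        # then pair each chunk with the response so BERT sees them jointly.
        ctx_ids = self.tokenizer(context, add_special_tokens=False)["input_ids"]
        step = self.chunk_len - 2  # leave room for special tokens
        chunks = [ctx_ids[i:i + step] for i in range(0, len(ctx_ids), step)]
        chunk_texts = [self.tokenizer.decode(c) for c in chunks]
        enc = self.tokenizer(
            chunk_texts, [response] * len(chunk_texts),
            padding=True, truncation=True,
            max_length=self.chunk_len, return_tensors="pt",
        )
        out = self.encoder(**enc)
        cls_embs = out.last_hidden_state[:, 0]          # one [CLS] vector per chunk
        # Aggregate: let chunk-level evidence interact, pool, and classify.
        fused = self.aggregator(cls_embs.unsqueeze(0))  # (1, num_chunks, hidden)
        pooled = fused.mean(dim=1)
        return self.classifier(pooled)                  # logits over the two labels
```

Under these assumptions, a call such as `DecomposeAggregateDetector()(long_document, model_response)` returns chunk-aggregated logits, so inference stays a single encoder pass per chunk rather than a full LLM generation, which is consistent with the faster inference claimed in the abstract.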
@article{liu2025_2504.19457,
  title   = {Towards Long Context Hallucination Detection},
  author  = {Siyi Liu and Kishaloy Halder and Zheng Qi and Wei Xiao and Nikolaos Pappas and Phu Mon Htut and Neha Anna John and Yassine Benajiba and Dan Roth},
  journal = {arXiv preprint arXiv:2504.19457},
  year    = {2025}
}