HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification

9 April 2025
Bibek Paudel
Alexander Lyzhov
Preetam Joshi
Puneet Anand
Abstract

This paper introduces a comprehensive system for detecting hallucinations in large language model (LLM) outputs in enterprise settings. We present a novel taxonomy of LLM responses specific to hallucination in enterprise applications, categorizing them into context-based, common knowledge, enterprise-specific, and innocuous statements. Our hallucination detection model, HDM-2, validates LLM responses with respect to both context and generally known facts (common knowledge). It provides both hallucination scores and word-level annotations, enabling precise identification of problematic content. To evaluate HDM-2 on context-based and common-knowledge hallucinations, we introduce a new dataset, HDMBench. Experimental results demonstrate that HDM-2 outperforms existing approaches across the RagTruth, TruthfulQA, and HDMBench datasets. This work addresses the specific challenges of enterprise deployment, including computational efficiency, domain specialization, and fine-grained error identification. Our evaluation dataset, model weights, and inference code are publicly available.
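The abstract describes HDM-2 as returning both a response-level hallucination score and word-level annotations. The sketch below illustrates how such a detector could be wrapped as a token-classification model in Python; the checkpoint name, label scheme, and score aggregation are placeholder assumptions for illustration, not the authors' released inference interface.

```python
# Hypothetical sketch of a word-level hallucination detector in the spirit of HDM-2.
# The checkpoint name and binary label scheme are placeholders, not the authors' release.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "your-org/hdm-2-placeholder"  # assumption: a token-classification checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)
model.eval()

def detect_hallucinations(context: str, response: str):
    """Return a response-level hallucination score and per-token annotations.

    Assumes the model was trained on (context, response) pairs with two labels
    per token: 0 = supported, 1 = hallucinated.
    """
    inputs = tokenizer(context, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits              # shape: (1, seq_len, num_labels)
    probs = logits.softmax(dim=-1)[0, :, 1]          # P(hallucinated) for each token
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    annotations = list(zip(tokens, probs.tolist()))  # word-level annotations
    score = probs.max().item()                       # simple aggregation: worst token
    return score, annotations

# Toy usage: the response contradicts the context's date, so a working detector
# should flag the "April" span and return a high score.
score, spans = detect_hallucinations(
    context="The invoice was issued on 3 March 2024 for $1,200.",
    response="The invoice, issued in April 2024, totals $1,200.",
)
print(f"hallucination score: {score:.2f}")
```

Aggregating by the maximum per-token probability is only one of several plausible choices; the paper does not specify how span-level annotations are combined into the overall score.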

@article{paudel2025_2504.07069,
  title={HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification},
  author={Bibek Paudel and Alexander Lyzhov and Preetam Joshi and Puneet Anand},
  journal={arXiv preprint arXiv:2504.07069},
  year={2025}
}