Real-Time Evaluation Models for RAG: Who Detects Hallucinations Best?

27 March 2025
Ashish Sardana
HILM, VLM
Abstract

This article surveys evaluation models that automatically detect hallucinations in Retrieval-Augmented Generation (RAG), and presents a comprehensive benchmark of their performance across six RAG applications. The methods in our study include LLM-as-a-Judge, Prometheus, Lynx, the Hughes Hallucination Evaluation Model (HHEM), and the Trustworthy Language Model (TLM). These approaches are all reference-free, requiring no ground-truth answers or labels to catch incorrect LLM responses. Our study reveals that, across diverse RAG applications, some of these approaches consistently detect incorrect RAG responses with high precision and recall.
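
As a concrete illustration of the reference-free setup the abstract describes, the sketch below scores a RAG response using only the user question and the retrieved context, with no ground-truth answer. It follows the generic LLM-as-a-Judge pattern; the complete() function is a placeholder for whatever LLM client is used, and the prompt wording and 0.5 score threshold are illustrative assumptions rather than the paper's exact protocol.

# Minimal reference-free hallucination check in the LLM-as-a-Judge style.
# `complete` is a stand-in for any LLM text-completion call; the prompt and
# threshold below are illustrative assumptions, not the benchmark's protocol.

JUDGE_PROMPT = """You are verifying a RAG system's answer.
Question: {question}
Retrieved context: {context}
Answer: {answer}

On a scale from 0.0 to 1.0, how well is the answer supported by the
retrieved context alone? Reply with only the number."""


def complete(prompt: str) -> str:
    """Placeholder for an LLM call (plug in any chat/completions client)."""
    raise NotImplementedError("plug in your LLM client here")


def judge_response(question: str, context: str, answer: str,
                   threshold: float = 0.5) -> bool:
    """Return True if the answer looks grounded, False if it looks hallucinated."""
    raw = complete(JUDGE_PROMPT.format(
        question=question, context=context, answer=answer))
    try:
        score = float(raw.strip())
    except ValueError:
        score = 0.0  # unparseable judge output is treated as unsupported
    return score >= threshold

Note that no reference answer appears anywhere in the judge prompt; this is what allows detectors of this kind to run in real time on live RAG traffic, as the title suggests.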

@article{sardana2025_2503.21157,
  title={Real-Time Evaluation Models for RAG: Who Detects Hallucinations Best?},
  author={Ashish Sardana},
  journal={arXiv preprint arXiv:2503.21157},
  year={2025}
}