This article surveys evaluation models that automatically detect hallucinations in Retrieval-Augmented Generation (RAG), and presents a comprehensive benchmark of their performance across six RAG applications. The methods covered in our study are: LLM-as-a-Judge, Prometheus, Lynx, the Hughes Hallucination Evaluation Model (HHEM), and the Trustworthy Language Model (TLM). These approaches are all reference-free, requiring no ground-truth answers or labels to catch incorrect LLM responses. Our study reveals that, across diverse RAG applications, some of these approaches consistently detect incorrect RAG responses with high precision/recall.
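To make the reference-free setup concrete, below is a minimal sketch of an LLM-as-a-Judge style hallucination check: the judge model scores whether a RAG answer is supported by its retrieved context, with no ground-truth answer required. The prompt wording, the `generate` placeholder, the `judge_rag_response` helper, and the 0.5 threshold are illustrative assumptions, not details taken from the paper or from any of the benchmarked tools.

```python
# Minimal sketch of a reference-free LLM-as-a-Judge hallucination check for RAG.
# `generate` is a hypothetical placeholder for any chat-completion call; wire it
# up to whatever LLM provider you use.

JUDGE_PROMPT = """You are evaluating a RAG system's answer.

Question:
{question}

Retrieved context:
{context}

Answer:
{answer}

Is every claim in the answer supported by the retrieved context?
Reply with a single number between 0 (completely unsupported) and 1 (fully supported)."""


def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (hypothetical; not part of the paper)."""
    raise NotImplementedError("Connect this to your LLM provider.")


def judge_rag_response(question: str, context: str, answer: str,
                       threshold: float = 0.5) -> dict:
    """Score an answer against its retrieved context; no ground-truth label needed."""
    raw = generate(JUDGE_PROMPT.format(question=question, context=context, answer=answer))
    try:
        score = float(raw.strip())
    except ValueError:
        score = 0.0  # treat unparseable judge output as untrustworthy
    return {"score": score, "hallucination": score < threshold}
```

In practice, such a detector is run on each (question, context, answer) triple at serving time, and its flagged responses can be escalated or suppressed; the benchmarked methods differ mainly in how they produce this trustworthiness score.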
@article{sardana2025_2503.21157,
  title   = {Real-Time Evaluation Models for RAG: Who Detects Hallucinations Best?},
  author  = {Ashish Sardana},
  journal = {arXiv preprint arXiv:2503.21157},
  year    = {2025}
}