Chainpoll: A high efficacy method for LLM hallucination detection
arXiv: 2310.18344
22 October 2023
Robert Friel, Atindriyo Sanyal
[LRM, HILM]
Papers citing "Chainpoll: A high efficacy method for LLM hallucination detection" (16 papers)
Patchwork: A Unified Framework for RAG Serving. Bodun Hu, Luis Pabon, Saurabh Agarwal, Aditya Akella. 01 May 2025.

ML For Hardware Design Interpretability: Challenges and Opportunities. Raymond Baartmans, Andrew Ensinger, Victor Agostinelli, Lizhong Chen. 11 Apr 2025.

Can Large Audio-Language Models Truly Hear? Tackling Hallucinations with Multi-Task Assessment and Stepwise Audio Reasoning. Chun-Yi Kuan, Hung-yi Lee. [AuLLM, LRM] 03 Jan 2025.

LLM Hallucination Reasoning with Zero-shot Knowledge Test. Seongmin Lee, Hsiang Hsu, Chun-Fu Chen. [LRM] 14 Nov 2024.

RAGulator: Lightweight Out-of-Context Detectors for Grounded Text Generation. Ian Poey, Jiajun Liu, Qishuai Zhong, Adrien Chenailler. 06 Nov 2024.

A Debate-Driven Experiment on LLM Hallucinations and Accuracy. Ray Li, Tanishka Bagade, Kevin Martinez, Flora Yasmin, Grant Ayala, Michael Lam, Kevin Zhu. [HILM] 25 Oct 2024.

ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability. ZhongXiang Sun, Xiaoxue Zang, Kai Zheng, Yang Song, Jun Xu, Xiao Zhang, Weijie Yu, Yang Song, Han Li. 15 Oct 2024.

SafeLLM: Domain-Specific Safety Monitoring for Large Language Models: A Case Study of Offshore Wind Maintenance. Connor Walker, Callum Rothon, Koorosh Aslansefat, Y. Papadopoulos, Nina Dethlefs. 06 Oct 2024.

Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling. Xinyue Fang, Zhen Huang, Zhiliang Tian, Minghui Fang, Ziyi Pan, Quntian Fang, Zhihua Wen, Hengyue Pan, Dongsheng Li. [HILM] 17 Sep 2024.

SUKHSANDESH: An Avatar Therapeutic Question Answering Platform for Sexual Education in Rural India. Salam Michael Singh, Shubhmoy Kumar Garg, Amitesh Misra, Aaditeshwar Seth, Tanmoy Chakraborty. 03 May 2024.

Can We Catch the Elephant? A Survey of the Evolvement of Hallucination Evaluation on Natural Language Generation. Siya Qi, Yulan He, Zheng Yuan. [LRM, HILM] 18 Apr 2024.

Multicalibration for Confidence Scoring in LLMs. Gianluca Detommaso, Martín Bertrán, Riccardo Fogliato, Aaron Roth. 06 Apr 2024.

SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection. Bradley Paul Allen, Fina Polat, Paul T. Groth. [VLM] 04 Apr 2024.

LightHouse: A Survey of AGI Hallucination. Feng Wang. [LRM, HILM, VLM] 08 Jan 2024.

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, ..., Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu. [LRM, HILM] 09 Nov 2023.

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models. Potsawee Manakul, Adian Liusie, Mark J. F. Gales. [HILM, LRM] 15 Mar 2023.