arXiv: 2405.20315
ANAH: Analytical Annotation of Hallucinations in Large Language Models
30 May 2024
Ziwei Ji, Yuzhe Gu, Wenwei Zhang, Chengqi Lyu, Dahua Lin, Kai Chen
HILM
Papers citing "ANAH: Analytical Annotation of Hallucinations in Large Language Models" (4 papers shown)
HalluLens: LLM Hallucination Benchmark
Yejin Bang, Ziwei Ji, Alan Schelten, Anthony Hartshorn, Tara Fowler, Cheng Zhang, Nicola Cancedda, Pascale Fung
HILM
24 Apr 2025

ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models
Yuzhe Gu, Ziwei Ji, Wenwei Zhang, Chengqi Lyu, Dahua Lin, Kai Chen
HILM
05 Jul 2024

How Language Model Hallucinations Can Snowball
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
HILM, LRM
22 May 2023

A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation
Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, W. Dolan
HILM
18 Apr 2021