Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling
Xinyue Fang, Zhen Huang, Zhiliang Tian, Minghui Fang, Ziyi Pan, Quntian Fang, Zhihua Wen, Hengyue Pan, Dongsheng Li (17 September 2024) [HILM]
arXiv: 2409.11283
Papers citing "Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling" (34 papers):
Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering
Zhihua Wen, Zhiliang Tian, Z. Jian, Zhen Huang, Pei Ke, Yifu Gao, Minlie Huang, Dongsheng Li (23 May 2024)

ERATTA: Extreme RAG for Table To Answers with Large Language Models
Sohini Roychowdhury, Marko Krema, Anvar Mahammad, Brian Moore, Arijit Mukherjee, Punit Prakashchandra (07 May 2024) [RALM]

PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics
Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, Mario Fritz (06 Apr 2024) [HILM]

Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations
Mahjabin Nahar, Haeseung Seo, Eun-Ju Lee, Aiping Xiong, Dongwon Lee (04 Apr 2024) [HILM]

SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection
Bradley Paul Allen, Fina Polat, Paul T. Groth (04 Apr 2024) [VLM]

On Large Language Models' Hallucination with Regard to Known Facts
Che Jiang, Biqing Qi, Xiangyu Hong, Dayuan Fu, Yang Cheng, Fandong Meng, Mo Yu, Bowen Zhou, Jie Zhou (29 Mar 2024) [HILM, LRM]

Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding
Xintong Wang, Jingheng Pan, Liang Ding, Christian Biemann (27 Mar 2024) [MLLM]

Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection
Moxin Li, Wenjie Wang, Fuli Feng, Fengbin Zhu, Qifan Wang, Tat-Seng Chua (15 Mar 2024) [HILM, LRM]

Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification
Ekaterina Fadeeva, Aleksandr Rubashevskii, Artem Shelmanov, Sergey Petrakov, Haonan Li, ..., Gleb Kuzmin, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov (07 Mar 2024) [HILM]

In Search of Truth: An Interrogation Approach to Hallucination Detection
Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein (05 Mar 2024) [HILM]

Enabling Weak LLMs to Judge Response Reliability via Meta Ranking
Zijun Liu, Boqun Kou, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu (19 Feb 2024)

DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection
Herun Wan, Shangbin Feng, Zhaoxuan Tan, Heng Wang, Yulia Tsvetkov, Minnan Luo (16 Feb 2024)

INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection
Chao Chen, Kai-Chun Liu, Ze Chen, Yi Gu, Yue-bo Wu, Mingyuan Tao, Zhihang Fu, Jieping Ye (06 Feb 2024) [HILM]

Reducing LLM Hallucinations using Epistemic Neural Networks
Shreyas Verma, Kien Tran, Yusuf Ali, Guangyu Min (25 Dec 2023)

On Early Detection of Hallucinations in Factual Question Answering
Ben Snyder, Marius Moisescu, Muhammad Bilal Zafar (19 Dec 2023) [HILM]

DelucionQA: Detecting Hallucinations in Domain-specific Question Answering
Mobashir Sadat, Zhengyu Zhou, Lukas Lange, Jun Araki, Arsalan Gundroo, Bingqing Wang, Rakesh R Menon, Md. Rizwan Parvez, Zhe Feng (08 Dec 2023) [HILM]

Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng Zhang, Cheng Zhou, Xinbing Wang, Luoyi Fu (22 Nov 2023) [HILM]

Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
Haoqiang Kang, Juntong Ni, Huaxiu Yao (15 Nov 2023) [HILM, LRM]

SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley Malin, Kumar Sricharan (03 Nov 2023) [HILM, LRM]

FLEEK: Factual Error Detection and Correction with Evidence Retrieved from External Knowledge
Farima Fatahi Bayat, Kun Qian, Benjamin Han, Yisi Sang, Anton Belyi, Samira Khorshidi, Fei Wu, Ihab F. Ilyas, Yunyao Li (26 Oct 2023) [HILM]

Chainpoll: A high efficacy method for LLM hallucination detection
Robert Friel, Atindriyo Sanyal (22 Oct 2023) [LRM, HILM]

A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection
Shiping Yang, Renliang Sun, Xiao-Yi Wan (10 Oct 2023) [HILM]

GROVE: A Retrieval-augmented Complex Story Generation Framework with A Forest of Evidence
Zhihua Wen, Zhiliang Tian, Wei Wu, Yuxin Yang, Yanqi Shi, Zhen Huang, Dongsheng Li (09 Oct 2023) [RALM]

Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi (26 Sep 2023) [HILM]

DaMSTF: Domain Adversarial Learning Enhanced Meta Self-Training for Domain Adaptation
Menglong Lu, Zhen Huang, Yunxiang Zhao, Zhiliang Tian, Yang Liu, Dongsheng Li (05 Aug 2023)

Meta-Tsallis-Entropy Minimization: A New Self-Training Approach for Domain Adaptation on Text Classification
Menglong Lu, Zhen Huang, Zhiliang Tian, Yunxiang Zhao, Xuanyu Fei, Dongsheng Li (04 Aug 2023) [OOD]

A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu (08 Jul 2023) [HILM]

LM vs LM: Detecting Factual Errors via Cross Examination
Roi Cohen, May Hamri, Mor Geva, Amir Globerson (22 May 2023) [HILM]

RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought
Tianci Xue, Ziqi Wang, Zhenhailong Wang, Chi Han, Pengfei Yu, Heng Ji (19 May 2023) [KELM, LRM]

G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, Chenguang Zhu (29 Mar 2023) [ELM, ALM, LM&MA]

Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang (08 Feb 2023) [LM&MA, AI4MH, LRM, ELM]

How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig (02 Dec 2020) [UQCV]

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou (25 Feb 2020) [VLM]

Modeling Relational Data with Graph Convolutional Networks
Michael Schlichtkrull, Thomas Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, Max Welling (17 Mar 2017) [GNN]