Insights into Classifying and Mitigating LLMs' Hallucinations
arXiv:2311.08117 · 14 November 2023
Alessandro Bruno, P. Mazzeo, Aladine Chetouani, Marouane Tliba, M. A. Kerkouri
Tags: HILM
Papers citing "Insights into Classifying and Mitigating LLMs' Hallucinations" (13 papers shown)
"Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak
Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Jiayi Mao, Xueqi Cheng
AAML · 47 / 9 / 0 · 17 Jun 2024
Detecting and Mitigating Hallucinations in Multilingual Summarisation
Yifu Qiu, Yftah Ziser, Anna Korhonen, E. Ponti, Shay B. Cohen
HILM · 59 / 43 / 0 · 23 May 2023
Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery
Debadutta Dash, Rahul Thapa, Juan M. Banda, Akshay Swaminathan, Morgan Cheatham, ..., Garret K. Morris, H. Magon, M. Lungren, Eric Horvitz, N. Shah
ELM, LM&MA, AI4MH · 68 / 51 / 0 · 26 Apr 2023
The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM · 218 / 301 / 0 · 26 Apr 2023
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark J. F. Gales
HILM, LRM · 152 / 396 / 0 · 15 Mar 2023
An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks
Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
RALM, KELM · 51 / 43 / 0 · 30 Oct 2022
GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang
BDL, LRM · 253 / 1,073 / 0 · 05 Oct 2022
SKILL: Structured Knowledge Infusion for Large Language Models
Fedor Moiseev, Zhe Dong, Enrique Alfonseca, Martin Jaggi
KELM · 58 / 58 / 0 · 17 May 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 339 / 12,003 / 0 · 04 Mar 2022
Let there be a clock on the beach: Reducing Object Hallucination in Image Captioning
Ali Furkan Biten, L. G. I. Bigorda, Dimosthenis Karatzas
97 / 57 / 0 · 04 Oct 2021
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Meng Cao, Yue Dong, Jackie C.K. Cheung
HILM · 178 / 146 / 0 · 30 Aug 2021
30 Aug 2021
Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark
Nouha Dziri
Hannah Rashkin
Tal Linzen
David Reitter
ALM
195
79
0
30 Apr 2021
Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
MoE · 249 / 205 / 0 · 25 Sep 2019