arXiv:2406.04175
Confabulation: The Surprising Value of Large Language Model Hallucinations
6 June 2024
Peiqi Sui, Eamon Duede, Sophie Wu, Richard Jean So
Tags: HILM, LLMAG

Papers citing "Confabulation: The Surprising Value of Large Language Model Hallucinations" (17 papers)

Synthetic Fluency: Hallucinations, Confabulations, and the Creation of Irish Words in LLM-Generated Translations
Sheila Castilho, Zoe Fitzsimmons, Claire Holton, Aoife Mc Donagh
10 Apr 2025

Block Toeplitz Sparse Precision Matrix Estimation for Large-Scale Interval-Valued Time Series Forecasting
Wan Tian, Zhongfeng Qin
Tags: AI4TS
04 Apr 2025

OAEI-LLM-T: A TBox Benchmark Dataset for Understanding Large Language Model Hallucinations in Ontology Matching
Zhangcheng Qiang, Kerry Taylor, Weiqing Wang, Jing Jiang
25 Mar 2025

"Generalization is hallucination" through the lens of tensor completions
Liang Ze Wong
Tags: VLM
24 Feb 2025

Memory Helps, but Confabulation Misleads: Understanding Streaming Events in Videos with MLLMs
Gengyuan Zhang, Mingcong Ding, Tong Liu, Yao Zhang, Volker Tresp
24 Feb 2025

Valuable Hallucinations: Realizable Non-realistic Propositions
Qiucheng Chen, Bo Wang
Tags: LRM
16 Feb 2025

DocETL: Agentic Query Rewriting and Evaluation for Complex Document Processing
Shreya Shankar, Tristan Chambers, Aditya G. Parameswaran, Eugene Wu
Tags: LLMAG
16 Oct 2024

On Classification with Large Language Models in Cultural Analytics
David Bamman, Kent K. Chang, L. Lucy, Naitian Zhou
15 Oct 2024

StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization
Zhuoqun Li, Xuanang Chen, Haiyang Yu, Hongyu Lin, Yaojie Lu, Qiaoyu Tang, Fei Huang, Xianpei Han, Le Sun, Yongbin Li
11 Oct 2024

AiBAT: Artificial Intelligence/Instructions for Build, Assembly, and Test
Benjamin Nuernberger, Anny Liu, Heather Stefanini, Richard Otis, Amanda Towler, R. Peter Dillon
03 Oct 2024

Towards a Science Exocortex
Kevin G. Yager
24 Jun 2024

Hallucination Detection and Hallucination Mitigation: An Investigation
Junliang Luo, Tianyu Li, Di Wu, Michael R. M. Jenkin, Steve Liu, Gregory Dudek
Tags: HILM, LLMAG
16 Jan 2024

Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?
Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas
Tags: HILM
27 Nov 2023

Correction with Backtracking Reduces Hallucination in Summarization
Zhenzhen Liu, Chao-gang Wan, Varsha Kishore, Jin Peng Zhou, Minmin Chen, Kilian Q. Weinberger
Tags: HILM
24 Oct 2023

The Dark Side of ChatGPT: Legal and Ethical Challenges from Stochastic Parrots and Hallucination
Z. Li
Tags: AILaw, SILM
21 Apr 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022

Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Mengyao Cao, Yue Dong, Jackie C.K. Cheung
Tags: HILM
30 Aug 2021