LLMs Will Always Hallucinate, and We Need to Live With This
Sourav Banerjee, Ayushi Agarwal, Saloni Singla
arXiv:2409.05746 · 9 September 2024
Tags: HILM, LRM
Papers citing "LLMs Will Always Hallucinate, and We Need to Live With This" (24 / 24 papers shown)

Osiris: A Lightweight Open-Source Hallucination Detection System
Alex Shan, John Bauer, Christopher D. Manning (07 May 2025) · Tags: HILM, VLM

Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
Yiyou Sun, Y. Gai, Lijie Chen, Abhilasha Ravichander, Yejin Choi, D. Song (17 Apr 2025) · Tags: HILM

Hallucination, reliability, and the role of generative AI in science
Charles Rathkopf (11 Apr 2025) · Tags: HILM

Unraveling Human-AI Teaming: A Review and Outlook
Bowen Lou, Tian Lu, T. S. Raghu, Yingjie Zhang (08 Apr 2025)

Automated Factual Benchmarking for In-Car Conversational Systems using Large Language Models
Rafael Giebisch, Ken E. Friedl, Lev Sorokin, Andrea Stocco (01 Apr 2025) · Tags: HILM

Lost in Cultural Translation: Do LLMs Struggle with Math Across Cultural Contexts?
Aabid Karim, Abdul Karim, Bhoomika Lohana, Matt Keon, Jaswinder Singh, A. Sattar (23 Mar 2025)

Logic-RAG: Augmenting Large Multimodal Models with Visual-Spatial Knowledge for Road Scene Understanding
Imran Kabir, Md. Alimoor Reza, Syed Masum Billah (16 Mar 2025) · Tags: ReLM, VLM, LRM

HalluVerse25: Fine-grained Multilingual Benchmark Dataset for LLM Hallucinations
Samir Abdaljalil, Hasan Kurban, Erchin Serpedin (10 Mar 2025) · Tags: HILM

SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment and Hallucination Mitigation in LLMs
Samir Abdaljalil, Filippo Pallucchini, Andrea Seveso, Hasan Kurban, Fabio Mercorio, Erchin Serpedin (04 Mar 2025) · Tags: HILM

Graph-Augmented Reasoning: Evolving Step-by-Step Knowledge Graph Retrieval for LLM Reasoning
Wenjie Wu, Yongcheng Jing, Yingjie Wang, Wenbin Hu, Dacheng Tao (03 Mar 2025) · Tags: RALM, LRM

Large Language Models: From Word Prediction to Comprehension? (original Spanish title: "Grandes modelos de lenguaje: de la predicción de palabras a la comprensión?")
Carlos Gómez-Rodríguez (25 Feb 2025) · Tags: SyDa, AILaw, ELM, VLM

'Generalization is hallucination' through the lens of tensor completions
Liang Ze Wong (24 Feb 2025) · Tags: VLM

The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?
Zhenheng Tang, Xiang Liu, Qian Wang, Peijie Dong, Bingsheng He, Xiaowen Chu, Bo Li (24 Feb 2025) · Tags: LRM

Valuable Hallucinations: Realizable Non-realistic Propositions
Qiucheng Chen, Bo Wang (16 Feb 2025) · Tags: LRM

CondAmbigQA: A Benchmark and Dataset for Conditional Ambiguous Question Answering
Zongxi Li, Y. Li, Haoran Xie, S. J. Qin (03 Feb 2025)

Large Language Models as Common-Sense Heuristics
Andrey Borro, Patricia J. Riddle, Michael W. Barley, Michael Witbrock (31 Jan 2025) · Tags: LRM, LM&Ro

Think More, Hallucinate Less: Mitigating Hallucinations via Dual Process of Fast and Slow Thinking
Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, Zhicheng Dou (02 Jan 2025) · Tags: HILM, LRM

MALMM: Multi-Agent Large Language Models for Zero-Shot Robotics Manipulation
Harsh Singh, Rocktim Jyoti Das, Mingfei Han, Preslav Nakov, Ivan Laptev (26 Nov 2024) · Tags: LM&Ro, LLMAG

No Free Lunch: Fundamental Limits of Learning Non-Hallucinating Generative Models
Changlong Wu, A. Grama, Wojciech Szpankowski (24 Oct 2024)

Smart ETL and LLM-based contents classification: the European Smart Tourism Tools Observatory experience
Diogo Cosme, António Galvão, Fernando Brito e Abreu (24 Oct 2024)

A Survey of Uncertainty Estimation in LLMs: Theory Meets Practice
Hsiu-Yuan Huang, Yutong Yang, Zhaoxi Zhang, Sanwoo Lee, Yunfang Wu (20 Oct 2024)

Not All Votes Count! Programs as Verifiers Improve Self-Consistency of Language Models for Math Reasoning
Vernon Y.H. Toh, Deepanway Ghosal, Soujanya Poria (16 Oct 2024) · Tags: LRM

Truth or Deceit? A Bayesian Decoding Game Enhances Consistency and Reliability
Weitong Zhang, Chengqi Zang, Bernhard Kainz (01 Oct 2024)

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch (22 Sep 2022) · Tags: XAI