Look Within, Why LLMs Hallucinate: A Causal Perspective
arXiv: 2407.10153
14 July 2024
He Li, Haoang Chi, Mingyu Liu, Wenjing Yang
LRM
Papers citing "Look Within, Why LLMs Hallucinate: A Causal Perspective" (5 / 5 papers shown)
Position: Foundation Models Need Digital Twin Representations
Yiqing Shen, Hao Ding, Lalithkumar Seenivasan, Tianmin Shu, Mathias Unberath
AI4CE
40 · 0 · 0
01 May 2025
Hallucination, Monofacts, and Miscalibration: An Empirical Investigation
Muqing Miao, Michael Kearns
67 · 0 · 0
11 Feb 2025
Why LLMs Hallucinate, and How to Get (Evidential) Closure: Perceptual, Intensional, and Extensional Learning for Faithful Natural Language Generation
Adam Bouyamourn
100 · 15 · 0
23 Oct 2023
What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
AAML, RALM
275 · 1,312 · 0
17 Jan 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
297 · 6,959 · 0
20 Apr 2018