Look Within, Why LLMs Hallucinate: A Causal Perspective

14 July 2024
He Li, Haoang Chi, Mingyu Liu, Wenjing Yang
LRM

Papers citing "Look Within, Why LLMs Hallucinate: A Causal Perspective"

5 / 5 papers shown
Position: Foundation Models Need Digital Twin Representations
Yiqing Shen, Hao Ding, Lalithkumar Seenivasan, Tianmin Shu, Mathias Unberath
AI4CE · 40 · 0 · 0 · 01 May 2025
Hallucination, Monofacts, and Miscalibration: An Empirical Investigation
Muqing Miao, Michael Kearns
67 · 0 · 0 · 11 Feb 2025
Why LLMs Hallucinate, and How to Get (Evidential) Closure: Perceptual, Intensional, and Extensional Learning for Faithful Natural Language Generation
Adam Bouyamourn
100 · 15 · 0 · 23 Oct 2023
What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
AAML · RALM · 275 · 1,312 · 0 · 17 Jan 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 297 · 6,959 · 0 · 20 Apr 2018