Shadows in the Attention: Contextual Perturbation and Representation Drift in the Dynamics of Hallucination in LLMs

22 May 2025
Zeyu Wei
Shuo Wang
Xiaohui Rong
Xuemin Liu
He Li
HILM
ArXiv (abs) · PDF · HTML

Papers citing "Shadows in the Attention: Contextual Perturbation and Representation Drift in the Dynamics of Hallucination in LLMs"

1 / 1 papers shown
What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis
Peiran Wang
Yang Liu
Yunfei Lu
Jue Hong
Ye Wu
HILM, LRM
20 Feb 2025