2411.09968
Cited By
Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs
15 November 2024
Xiaofeng Zhang, Yihao Quan, Chaochen Gu, Chen Shen, Xiaosong Yuan, Shaotian Yan, Hao Cheng, Kaijie Wu, Jieping Ye
Papers citing
"Seeing Clearly by Layer Two: Enhancing Attention Heads to Alleviate Hallucination in LVLMs"
8 / 8 papers shown
Mitigating Object Hallucination via Robust Local Perception Search
Zixian Gao, Chao Yang, Zhanhui Zhou, Xing Xu, Chaochao Lu
MLLM · 07 Jun 2025
Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration
Mehrdad Fazli, Bowen Wei, Ziwei Zhu
VLM · 27 May 2025
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models
Liqiang Jing, Guiming Hardy Chen, Ehsan Aghazadeh, Xin Eric Wang, Xinya Du
04 May 2025
Visual Attention Never Fades: Selective Progressive Attention ReCalibration for Detailed Image Captioning in Multimodal Large Language Models
Mingi Jung, Saehuyng Lee, Eunji Kim, Sungroh Yoon
03 Feb 2025
First-place Solution for Streetscape Shop Sign Recognition Competition
Bin Wang, Li Jing
06 Jan 2025
Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence
Jinghan He, Kuan Zhu, Haiyun Guo, Sihang Li, Zhenglin Hua, Yuheng Jia, Ming Tang, Tat-Seng Chua, Jinqiao Wang
VLM · 18 Dec 2024
Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models
Xin Zou, Yizhou Wang, Yibo Yan, Yuanhuiyi Lyu, Kening Zheng, ..., Junkai Chen, Peijie Jiang, Qingbin Liu, Chang Tang, Xuming Hu
04 Oct 2024
Hallucination of Multimodal Large Language Models: A Survey
Zechen Bai, Pichao Wang, Tianjun Xiao, Tong He, Zongbo Han, Zheng Zhang, Mike Zheng Shou
VLM, LRM · 29 Apr 2024