Unveiling the Ignorance of MLLMs: Seeing Clearly, Answering Incorrectly

15 June 2024
Yexin Liu
Zhengyang Liang
Yueze Wang
Xianfeng Wu
Feilong Tang
Muyang He
Jian Li
Zheng Liu
Harry Yang
Sernam Lim
Bo Zhao
Abstract

Multimodal Large Language Models (MLLMs) have displayed remarkable performance in multi-modal tasks, particularly in visual comprehension. However, we reveal that MLLMs often generate incorrect answers even when they understand the visual content. To investigate this, we manually construct a benchmark with 12 categories and design evaluation metrics that assess the degree of error in MLLM responses even when the visual content is seemingly understood. Based on this benchmark, we test 15 leading MLLMs and analyze the distribution of attention maps and logits of some MLLMs. Our investigation identifies two primary issues: 1) most instruction-tuning datasets predominantly feature questions that 'directly' relate to the visual content, leading to a bias in MLLMs' responses to other, indirect questions, and 2) MLLMs' attention to visual tokens is notably lower than their attention to system and question tokens. We further observe that attention scores between questions and visual tokens, as well as the model's confidence in its answers, are lower for misleading questions than for straightforward ones. To address the first challenge, we introduce a paired positive-and-negative data construction pipeline to diversify the dataset. For the second challenge, we propose to enhance the model's focus on visual content during decoding by refining the text and visual prompts. For the text prompt, we propose a content-guided refinement strategy that performs a preliminary visual content analysis to generate structured information before answering the question. Additionally, we employ a visual attention refinement strategy that highlights question-relevant visual tokens to increase the model's attention to visual content that aligns with the question. Extensive experiments demonstrate that these challenges can be significantly mitigated with our proposed dataset and techniques.
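The content-guided refinement described in the abstract can be read as a two-stage prompt: first ask the MLLM for a structured description of the image, then answer the question conditioned on that description. The sketch below is a minimal illustration under that reading; the prompt wording, the structured fields, and the `generate_fn` callable are assumptions for illustration, not the paper's actual prompts or implementation.

```python
from typing import Callable

# Minimal sketch of content-guided refinement (illustrative, not the paper's code).
# `generate_fn` stands in for any MLLM call that maps a text prompt (paired with
# the image elsewhere) to a text response.

ANALYSIS_PROMPT = (
    "Describe the image as structured information:\n"
    "- objects and their attributes\n"
    "- spatial relations\n"
    "- any visible text\n"
)

def content_guided_answer(
    question: str,
    generate_fn: Callable[[str], str],
) -> str:
    """Two-stage prompting: (1) extract structured visual content,
    (2) answer the question conditioned on that content."""
    # Stage 1: preliminary visual content analysis.
    structured_content = generate_fn(ANALYSIS_PROMPT)

    # Stage 2: answer the (possibly misleading) question, grounded in the
    # structured description rather than in the question's phrasing alone.
    answer_prompt = (
        f"Image content:\n{structured_content}\n\n"
        f"Using only the image content above, answer: {question}"
    )
    return generate_fn(answer_prompt)
```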
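The visual attention refinement is described as highlighting question-relevant visual tokens so that decoding attends more to visual content aligned with the question. One toy way to picture this is to add a relevance-weighted bonus to the pre-softmax attention scores of visual tokens; the tensor shapes, the visual-token mask, the `relevance` scores, and the `alpha` scale below are hypothetical inputs, not the paper's exact mechanism.

```python
import torch

def refine_visual_attention(
    attn_logits: torch.Tensor,        # (num_heads, seq_len) pre-softmax scores for the current query
    visual_token_mask: torch.Tensor,  # (seq_len,) bool, True where the key is a visual token
    relevance: torch.Tensor,          # (seq_len,) in [0, 1], question-to-visual relevance
    alpha: float = 1.0,               # strength of the refinement
) -> torch.Tensor:
    """Add a relevance-weighted bonus to the attention logits of visual tokens."""
    bonus = alpha * relevance * visual_token_mask.float()
    return attn_logits + bonus  # broadcasts over heads

# Example: 4 heads, 6 key tokens; tokens 2-4 are visual, token 3 matches the question.
logits = torch.zeros(4, 6)
mask = torch.tensor([False, False, True, True, True, False])
rel = torch.tensor([0.0, 0.0, 0.1, 0.9, 0.2, 0.0])
weights = torch.softmax(refine_visual_attention(logits, mask, rel), dim=-1)
print(weights[0])  # the question-relevant visual token now gets the largest weight
```

In this toy setup the bonus only shifts weight toward visual keys flagged as relevant; non-visual tokens are left untouched, which matches the stated goal of raising attention to question-aligned visual content.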

@article{liu2025_2406.10638,
  title={Unveiling the Ignorance of MLLMs: Seeing Clearly, Answering Incorrectly},
  author={Yexin Liu and Zhengyang Liang and Yueze Wang and Xianfeng Wu and Feilong Tang and Muyang He and Jian Li and Zheng Liu and Harry Yang and Sernam Lim and Bo Zhao},
  journal={arXiv preprint arXiv:2406.10638},
  year={2025}
}