
Evaluating and Steering Modality Preferences in Multimodal Large Language Model

27 May 2025
Yu Zhang
Jinlong Ma
Yongshuai Hou
Xuefeng Bai
Kehai Chen
Yang Xiang
Jun Yu
Min Zhang
Abstract

Multimodal large language models (MLLMs) have achieved remarkable performance on complex tasks with multimodal context. However, whether they exhibit modality preference when processing multimodal contexts remains understudied. To study this question, we first build the MC² benchmark under controlled evidence-conflict scenarios to systematically evaluate modality preference, i.e., the tendency to favor one modality over another when making decisions based on conflicting multimodal evidence. Our extensive evaluation reveals that all 18 tested MLLMs generally demonstrate clear modality bias, and that modality preference can be influenced by external interventions. An in-depth analysis shows that the preference direction can be captured within the latent representations of MLLMs. Building on this, we propose a probing and steering method based on representation engineering to explicitly control modality preference without additional fine-tuning or carefully crafted prompts. Our method effectively amplifies modality preference toward a desired direction and applies to downstream tasks such as hallucination mitigation and multimodal machine translation, yielding promising improvements.
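The abstract describes probing a preference direction in the model's latent space and then steering activations along it. A minimal sketch of that general idea, under assumptions not specified in the abstract (a difference-of-means probe over synthetic hidden states and simple additive steering; all names and numbers here are illustrative, not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state dimension

# Synthetic stand-ins for hidden states collected from responses that
# followed the text modality vs. the image modality in a conflict setting.
text_pref = rng.normal(0.0, 1.0, size=(100, d)) + 0.5
image_pref = rng.normal(0.0, 1.0, size=(100, d)) - 0.5

# Probe: estimate the preference direction as the (unit-normalized)
# difference of class means in representation space.
direction = text_pref.mean(axis=0) - image_pref.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(hidden: np.ndarray, direction: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Shift a hidden state along the preference direction.

    alpha > 0 pushes toward the text-preferring side, alpha < 0 toward
    the image-preferring side; its magnitude sets the steering strength.
    """
    return hidden + alpha * direction

h = image_pref[0]
proj_before = h @ direction
proj_after = steer(h, direction) @ direction
# Additive steering increases the projection onto the probed direction
# by exactly alpha (here 2.0), since direction has unit norm.
```

In practice such a direction would be extracted from a chosen transformer layer and added during the forward pass; this sketch only illustrates the linear-algebraic core of probing and steering.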

@article{zhang2025_2505.20977,
  title={Evaluating and Steering Modality Preferences in Multimodal Large Language Model},
  author={Yu Zhang and Jinlong Ma and Yongshuai Hou and Xuefeng Bai and Kehai Chen and Yang Xiang and Jun Yu and Min Zhang},
  journal={arXiv preprint arXiv:2505.20977},
  year={2025}
}