
MLLMs are Deeply Affected by Modality Bias

Main: 9 pages · 4 figures · 2 tables · Bibliography: 6 pages
Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have shown promising results in integrating diverse modalities such as text and images. However, MLLMs are heavily influenced by modality bias, often relying on language while under-utilizing other modalities such as visual inputs. This position paper argues that MLLMs are deeply affected by modality bias. First, we diagnose the current state of modality bias, highlighting its manifestations across various tasks. Second, we propose a systematic research roadmap for modality bias in MLLMs. Third, we identify key factors contributing to modality bias in MLLMs and offer actionable suggestions for future research to mitigate it. To substantiate these findings, we conduct experiments demonstrating the influence of each factor: (1) Data characteristics: language data is compact and abstract, whereas visual data is redundant and complex, creating an inherent imbalance in learning dynamics. (2) Imbalanced backbone capabilities: the dominance of pretrained language models in MLLMs leads to over-reliance on language and neglect of visual information. (3) Training objectives: current objectives often fail to promote balanced cross-modal alignment, resulting in shortcut learning biased toward language. These findings highlight the need for balanced training strategies and model architectures that better integrate multiple modalities in MLLMs. We call for interdisciplinary efforts to tackle these challenges and drive innovation in MLLM research. Our work offers a fresh perspective on modality bias in MLLMs and provides insights for developing more robust and generalizable multimodal systems, advancing progress toward Artificial General Intelligence.
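As a rough illustration of how the language-reliance described above can be probed (this is a sketch, not a method from the paper), the snippet below compares a model's answers with the real image versus an ablated (zeroed) visual input; a low change rate suggests the model answers mostly from language priors. The `DummyMLLM` class, its `answer` interface, and `visual_ablation_change_rate` are hypothetical stand-ins introduced here for illustration.

```python
# Minimal sketch of a modality-bias probe via visual ablation.
# All names below (DummyMLLM, visual_ablation_change_rate) are hypothetical.

import torch


class DummyMLLM:
    """Hypothetical stand-in for an MLLM exposing answer(image, question)."""

    def __init__(self, language_prior_weight: float = 0.8):
        # Higher weight -> answers depend more on the text prompt alone.
        self.language_prior_weight = language_prior_weight

    def answer(self, image: torch.Tensor, question: str) -> str:
        # Toy logic: mixes a weak "visual" signal (image mean) with a
        # deterministic pseudo language prior derived from the question.
        visual_signal = image.float().mean().item()
        prior = hash(question) % 2
        score = (1 - self.language_prior_weight) * visual_signal \
                + self.language_prior_weight * prior
        return "yes" if score > 0.5 else "no"


def visual_ablation_change_rate(model, images, questions) -> float:
    """Fraction of answers that change when the image is replaced by zeros."""
    changed = 0
    for img, q in zip(images, questions):
        with_image = model.answer(img, q)
        without_image = model.answer(torch.zeros_like(img), q)
        changed += int(with_image != without_image)
    return changed / len(questions)


if __name__ == "__main__":
    torch.manual_seed(0)
    images = [torch.rand(3, 224, 224) for _ in range(100)]
    questions = [f"Is object {i} present in the image?" for i in range(100)]
    model = DummyMLLM(language_prior_weight=0.8)
    rate = visual_ablation_change_rate(model, images, questions)
    # A change rate near 0% indicates answers are driven by language priors,
    # i.e., strong modality bias toward the language backbone.
    print(f"Answers changed under visual ablation: {rate:.0%}")
```

With the toy weighting above, ablating the image rarely flips an answer, which is the kind of behavior the paper attributes to imbalanced backbone capabilities and language-biased shortcut learning; a real evaluation would run the same comparison on an actual MLLM and a VQA-style benchmark.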

@article{zheng2025_2505.18657,
  title={MLLMs are Deeply Affected by Modality Bias},
  author={Xu Zheng and Chenfei Liao and Yuqian Fu and Kaiyu Lei and Yuanhuiyi Lyu and Lutao Jiang and Bin Ren and Jialei Chen and Jiawen Wang and Chengxin Li and Linfeng Zhang and Danda Pani Paudel and Xuanjing Huang and Yu-Gang Jiang and Nicu Sebe and Dacheng Tao and Luc Van Gool and Xuming Hu},
  journal={arXiv preprint arXiv:2505.18657},
  year={2025}
}