
VISTA: Enhancing Vision-Text Alignment in MLLMs via Cross-Modal Mutual Information Maximization

Abstract

Current multimodal large language models (MLLMs) face a critical challenge in modality alignment, often exhibiting a bias towards textual information at the expense of other modalities such as vision. This paper conducts a systematic information-theoretic analysis of the widely used cross-entropy loss in MLLMs, uncovering its implicit alignment objective. Our theoretical investigation reveals that this implicit objective has inherent limitations: cross-modal alignment degrades as text sequence length increases, which hinders effective multimodal information fusion. To overcome these drawbacks, we propose Vision-Text Alignment (VISTA), a novel approach guided by our theoretical insights. VISTA introduces an explicit alignment objective designed to maximize cross-modal mutual information, preventing the degradation of visual alignment. Notably, VISTA enhances the visual understanding capabilities of existing MLLMs without requiring any additional trainable modules or extra training data, making it both efficient and practical. Our method significantly outperforms baseline models across more than a dozen benchmark datasets, including VQAv2, MMStar, and MME, opening new directions for research on modality alignment in MLLMs.
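The abstract does not spell out the exact form of VISTA's explicit alignment objective. The sketch below is only an illustration of the general idea of adding a cross-modal mutual-information term on top of the standard next-token cross-entropy loss, using an InfoNCE-style lower bound as a stand-in; the function names, the pooled-feature inputs, and the weight `lam` are assumptions for exposition, not VISTA's actual interface or loss.

```python
import torch
import torch.nn.functional as F

def info_nce_alignment_loss(vision_feats, text_feats, temperature=0.07):
    """Illustrative InfoNCE lower bound on mutual information between
    pooled vision features and pooled text features, shape (batch, dim)."""
    v = F.normalize(vision_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = v @ t.T / temperature                      # (batch, batch) similarities
    targets = torch.arange(v.size(0), device=v.device)  # matched pairs on the diagonal
    # Symmetric contrastive loss: vision-to-text and text-to-vision
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

def total_loss(ce_loss, vision_feats, text_feats, lam=0.1):
    """Combine the usual autoregressive cross-entropy with the explicit
    alignment term; `lam` is a hypothetical weighting hyperparameter."""
    return ce_loss + lam * info_nce_alignment_loss(vision_feats, text_feats)
```

Because such a term is computed over whole-sequence (pooled) representations rather than per-token predictions, its strength does not shrink as the text gets longer, which is the kind of degradation the paper attributes to relying on cross-entropy alone.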

@article{li2025_2505.10917,
  title={VISTA: Enhancing Vision-Text Alignment in MLLMs via Cross-Modal Mutual Information Maximization},
  author={Mingxiao Li and Na Su and Fang Qu and Zhizhou Zhong and Ziyang Chen and Yuan Li and Zhaopeng Tu and Xiaolong Li},
  journal={arXiv preprint arXiv:2505.10917},
  year={2025}
}