Lightweight Vision-Language Models (VLMs) are indispensable for resource-constrained applications. The prevailing approach to aligning vision and language models involves freezing both the vision encoder and the language model while training small connector modules. However, this strategy heavily depends on the intrinsic capabilities of the language model, which can be suboptimal for lightweight models with limited representational capacity. In this work, we investigate this alignment bottleneck through the lens of mutual information, demonstrating that the constrained capacity of the language model inherently limits the Effective Mutual Information (EMI) between multimodal inputs and outputs, thereby compromising alignment quality. To address this challenge, we propose TinyAlign, a novel framework inspired by Retrieval-Augmented Generation, which strategically retrieves relevant context from a memory bank to enrich multimodal inputs and enhance their alignment. Extensive empirical evaluations reveal that TinyAlign significantly reduces training loss, accelerates convergence, and enhances task performance. Remarkably, it allows models to achieve baseline-level performance with only 40% of the fine-tuning data, highlighting exceptional data efficiency. Our work thus offers a practical pathway for developing more capable lightweight VLMs while introducing a fresh theoretical lens to better understand and address alignment bottlenecks in constrained multimodal systems.
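The abstract describes the mechanism only at a high level. One way to read the EMI argument: with the vision encoder and language model frozen, the multimodal input, the language model's internal representation, and the output form a Markov chain, so by the data processing inequality the output can carry no more information about the input than the limited-capacity representation retains; enriching the input with retrieved context is one way to route more task-relevant information through that bottleneck. The following minimal NumPy sketch illustrates the retrieval-and-enrichment idea under stated assumptions; the memory bank of precomputed key/value embeddings, the cosine-similarity top-k retrieval, and the names retrieve_context and enrich_multimodal_input are all illustrative, not the authors' implementation.

# Hypothetical sketch of retrieval-augmented input enrichment: retrieve the top-k
# entries from a memory bank by cosine similarity to a fused image-text query, then
# prepend them to the multimodal token sequence before the frozen language model.
import numpy as np

rng = np.random.default_rng(0)

D = 256          # shared embedding dimension (assumed)
BANK_SIZE = 1000 # number of cached context entries in the memory bank (assumed)
TOP_K = 4        # how many retrieved entries enrich each query (assumed)

# Memory bank: keys used for retrieval, values injected as extra context tokens.
bank_keys = rng.standard_normal((BANK_SIZE, D)).astype(np.float32)
bank_values = rng.standard_normal((BANK_SIZE, D)).astype(np.float32)


def l2_normalize(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Normalize vectors so that dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)


def retrieve_context(query: np.ndarray, k: int = TOP_K) -> np.ndarray:
    """Return the k memory-bank values whose keys are most similar to the query."""
    sims = l2_normalize(bank_keys) @ l2_normalize(query)          # (BANK_SIZE,)
    top_idx = np.argpartition(-sims, k)[:k]                       # unordered top-k
    top_idx = top_idx[np.argsort(-sims[top_idx])]                 # sort by similarity
    return bank_values[top_idx]                                   # (k, D)


def enrich_multimodal_input(vision_tokens: np.ndarray,
                            text_tokens: np.ndarray) -> np.ndarray:
    """Prepend retrieved context tokens to the multimodal token sequence."""
    # A simple mean-pooled fusion of vision and text serves as the retrieval query.
    query = l2_normalize(vision_tokens.mean(axis=0) + text_tokens.mean(axis=0))
    context = retrieve_context(query)                             # (TOP_K, D)
    return np.concatenate([context, vision_tokens, text_tokens])  # (TOP_K + Nv + Nt, D)


# Toy usage: 32 vision tokens and 16 text tokens, all already projected to dimension D.
vision_tokens = rng.standard_normal((32, D)).astype(np.float32)
text_tokens = rng.standard_normal((16, D)).astype(np.float32)
enriched = enrich_multimodal_input(vision_tokens, text_tokens)
print(enriched.shape)  # (52, 256)

In this reading, the retrieved context tokens act as additional conditioning that the frozen language model can exploit without any change to its weights; the paper's actual retrieval keys, bank construction, and fusion strategy may differ.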
@article{hu2025_2505.12884,
  title   = {TinyAlign: Boosting Lightweight Vision-Language Models by Mitigating Modal Alignment Bottlenecks},
  author  = {Yuanze Hu and Zhaoxin Fan and Xinyu Wang and Gen Li and Ye Qiu and Zhichao Yang and Wenjun Wu and Kejian Wu and Yifan Sun and Xiaotie Deng and Jin Dong},
  journal = {arXiv preprint arXiv:2505.12884},
  year    = {2025}
}