Vision Transformers (ViTs) are essential as foundation backbones in establishing the visual comprehension capabilities of Multimodal Large Language Models (MLLMs). Although most ViTs achieve impressive performance through image-text contrastive learning or self-supervised mechanisms, they struggle to engage in connector-based co-training directly with LLMs due to potential parameter initialization conflicts and modality semantic gaps. To address these challenges, this paper proposes SAILViT, a ViT enhanced by gradual feature learning that helps MLLMs break through performance bottlenecks in complex multimodal interactions. Through gradual feature refinement, SAILViT achieves coarse-to-fine-grained feature alignment and world knowledge infusion, better serving the target training demands. We perform thorough empirical analyses to confirm the robustness and generalizability of SAILViT across different dimensions, including parameter sizes, model architectures, training strategies, and data scales. Equipped with SAILViT, existing MLLMs show significant and consistent performance improvements on the OpenCompass benchmark across a wide range of downstream tasks. SAILViT series models are released at this https URL.
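The abstract describes gradual feature refinement only at a high level, so the following is a minimal sketch of one plausible reading: a staged alignment pipeline in which a connector is trained first against a frozen visual backbone, after which the backbone (and eventually all modules) are progressively unfrozen with smaller learning rates. The ToyViT and Connector classes, the three-stage schedule, and all hyperparameters are illustrative assumptions, not the released SAILViT training recipe.

```python
# Illustrative sketch only: stages, module names, and learning rates are
# assumptions for demonstration, not the paper's actual training recipe.
import torch
import torch.nn as nn


class ToyViT(nn.Module):
    """Stand-in visual backbone producing patch features."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(3 * 16 * 16, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, patches):  # patches: (B, N, 768)
        return self.encoder(patches)


class Connector(nn.Module):
    """Projects visual features into the LLM embedding space."""
    def __init__(self, vis_dim=64, llm_dim=128):
        super().__init__()
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, x):
        return self.proj(x)


def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag


vit, connector = ToyViT(), Connector()
llm_head = nn.Linear(128, 1000)  # stand-in for the language model side

# Stage 1 (coarse alignment): train only the connector against a frozen ViT.
# Stage 2 (feature refinement): unfreeze the ViT with a smaller learning rate
#   so its features adapt gradually toward the LLM's semantic space.
# Stage 3 (joint training): open all modules for end-to-end tuning.
stages = [
    {"train": [connector], "freeze": [vit, llm_head], "lr": 1e-3},
    {"train": [vit, connector], "freeze": [llm_head], "lr": 2e-5},
    {"train": [vit, connector, llm_head], "freeze": [], "lr": 1e-5},
]

patches = torch.randn(2, 196, 3 * 16 * 16)  # dummy batch of image patches
labels = torch.randint(0, 1000, (2, 196))   # dummy per-token targets

for i, stage in enumerate(stages, 1):
    for m in stage["freeze"]:
        set_trainable(m, False)
    for m in stage["train"]:
        set_trainable(m, True)
    params = [p for m in stage["train"] for p in m.parameters()]
    opt = torch.optim.AdamW(params, lr=stage["lr"])
    logits = llm_head(connector(vit(patches)))
    loss = nn.functional.cross_entropy(logits.view(-1, 1000), labels.view(-1))
    loss.backward()
    opt.step()
    opt.zero_grad()
    print(f"stage {i}: loss={loss.item():.3f}")
```

In this hypothetical schedule, the coarse-to-fine progression comes from widening the set of trainable modules while lowering the learning rate, so earlier-aligned components are not destabilized by later stages.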
@article{yin2025_2507.01643,
  title={SAILViT: Towards Robust and Generalizable Visual Backbones for MLLMs via Gradual Feature Refinement},
  author={Weijie Yin and Dingkang Yang and Hongyuan Dong and Zijian Kang and Jiacong Wang and Xiao Liang and Chao Feng and Jiao Ran},
  journal={arXiv preprint arXiv:2507.01643},
  year={2025}
}