The choice of a suitable visual language projector (VLP) is critical to the successful training of large visual language models (LVLMs). Mainstream VLPs fall into two broad categories, compressed and uncompressed, each offering distinct trade-offs between performance and computational efficiency. However, their security implications have not been thoroughly examined. Our comprehensive evaluation reveals significant differences in their security profiles: compressed projectors exhibit substantial vulnerabilities, allowing adversaries to compromise LVLMs even with minimal knowledge of their structure. In stark contrast, uncompressed projectors demonstrate robust security properties and introduce no additional vulnerabilities. These findings offer critical guidance for selecting VLPs that enhance the security and reliability of visual language models. The code will be released.
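To make the compressed/uncompressed distinction concrete, here is a minimal, hypothetical sketch (not from the paper): an uncompressed projector maps each visual token independently and preserves the token count, while a compressed projector pools the tokens down to a smaller fixed set before they reach the language model. The function names and the mean-pooling stand-in for cross-attention resampling are illustrative assumptions only.

```python
# Toy illustration of the two VLP families by their effect on token count.
# Names and pooling scheme are hypothetical, not the paper's method.

def uncompressed_project(tokens):
    """MLP-style projector: maps each visual token independently,
    so the number of tokens is preserved."""
    return [[2.0 * x for x in tok] for tok in tokens]  # toy per-token map

def compressed_project(tokens, num_queries):
    """Resampler-style projector: reduces N tokens to a fixed, smaller
    set of outputs (mean pooling stands in for learned cross-attention)."""
    chunk = max(1, len(tokens) // num_queries)
    out = []
    for i in range(0, len(tokens), chunk):
        group = tokens[i:i + chunk]
        dim = len(group[0])
        out.append([sum(t[d] for t in group) / len(group) for d in range(dim)])
    return out[:num_queries]

visual_tokens = [[float(i), float(i + 1)] for i in range(16)]  # 16 toy tokens
print(len(uncompressed_project(visual_tokens)))   # 16: token count preserved
print(len(compressed_project(visual_tokens, 4)))  # 4: token count reduced
```

The compression step discards per-token detail, which is the structural property the paper's security analysis distinguishes the two families by.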
@article{zhang2025_2506.00534,
  title={The Security Threat of Compressed Projectors in Large Vision-Language Models},
  author={Yudong Zhang and Ruobing Xie and Xingwu Sun and Jiansheng Chen and Zhanhui Kang and Di Wang and Yu Wang},
  journal={arXiv preprint arXiv:2506.00534},
  year={2025}
}