
LaVi: Efficient Large Vision-Language Models via Internal Feature Modulation

Main: 9 pages
Appendix: 3 pages
Bibliography: 5 pages
Figures: 10
Tables: 10
Abstract

Despite the impressive advancements of Large Vision-Language Models (LVLMs), existing approaches suffer from a fundamental bottleneck: inefficient visual-language integration. Current methods either disrupt the model's inherent structure or introduce a severe long-context computational burden, limiting scalability and efficiency. In this paper, we rethink multimodal integration and present LaVi, a novel LVLM that enables seamless and efficient vision-language fusion through internal feature modulation within the Large Language Model (LLM). Unlike dominant LVLMs that rely on visual token concatenation, LaVi bypasses long-context expansion by introducing a lightweight and adaptive transformation, which incorporates visual context by injecting token-wise, vision-conditioned deltas into the affine parameters of layer normalization. This mechanism directly modulates linguistic hidden states based on visual input, ensuring precise vision-language alignment while preserving the LLM's linguistic priors and drastically reducing computational costs. Extensive evaluations across 15 image and video benchmarks demonstrate that LaVi not only achieves state-of-the-art multimodal performance but also dramatically enhances efficiency. Compared to LLaVA-OV-7B, LaVi reduces FLOPs by 94.0%, improves inference speed by 3.1 times, and cuts memory usage in half, establishing LaVi as a scalable and practical solution for real-time multimodal reasoning. The code and models will be released soon.
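To make the modulation mechanism concrete, the sketch below illustrates one way token-wise, vision-conditioned deltas could be injected into the affine parameters of layer normalization. It is a minimal illustration under assumed interfaces, not the released LaVi implementation; the class and parameter names (VisionConditionedLayerNorm, delta_proj, vision_context) are hypothetical.

```python
import torch
import torch.nn as nn


class VisionConditionedLayerNorm(nn.Module):
    """Layer norm whose affine parameters are shifted by token-wise,
    vision-conditioned deltas (illustrative sketch, not official LaVi code)."""

    def __init__(self, hidden_dim: int, vision_dim: int):
        super().__init__()
        # Normalization without built-in affine; we apply affine params ourselves.
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        # Base affine parameters, as in a standard LayerNorm.
        self.gamma = nn.Parameter(torch.ones(hidden_dim))
        self.beta = nn.Parameter(torch.zeros(hidden_dim))
        # Lightweight projection mapping per-token visual context to
        # deltas for gamma and beta (hypothetical layer name).
        self.delta_proj = nn.Linear(vision_dim, 2 * hidden_dim)

    def forward(self, hidden_states: torch.Tensor, vision_context: torch.Tensor) -> torch.Tensor:
        # hidden_states:  (batch, seq_len, hidden_dim)  linguistic hidden states
        # vision_context: (batch, seq_len, vision_dim)  per-token visual context
        delta_gamma, delta_beta = self.delta_proj(vision_context).chunk(2, dim=-1)
        normed = self.norm(hidden_states)
        # Modulate the normalized states with vision-conditioned affine parameters;
        # no visual tokens are concatenated, so sequence length is unchanged.
        return (self.gamma + delta_gamma) * normed + (self.beta + delta_beta)
```

Because the visual signal enters through the normalization layer rather than through extra tokens, the sequence length and attention cost stay those of the text-only LLM, which is the source of the efficiency gains reported in the abstract.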

@article{yue2025_2506.16691,
  title={LaVi: Efficient Large Vision-Language Models via Internal Feature Modulation},
  author={Tongtian Yue and Longteng Guo and Yepeng Tang and Zijia Zhao and Xinxin Zhu and Hua Huang and Jing Liu},
  journal={arXiv preprint arXiv:2506.16691},
  year={2025}
}