Top-Down Compression: Revisit Efficient Vision Token Projection for Visual Instruction Tuning

Visual instruction tuning aims to enable large language models to comprehend the visual world, with a pivotal challenge lying in establishing an effective vision-to-language projection. However, existing methods often grapple with an intractable trade-off between accuracy and efficiency. In this paper, we present LLaVA-Meteor, a novel approach designed to break this deadlock, equipped with a Top-Down Compression paradigm that strategically compresses visual tokens without sacrificing core information. Specifically, we construct a trainable Flash Global Fusion module based on efficient selective state-space operators, which aligns the feature space while enabling each token to perceive the holistic visual context and instruction preference at low cost. Furthermore, a local-to-single scanning manner is employed to effectively capture local dependencies, thereby enhancing the model's capability in vision modeling. To alleviate computational overhead, we explore a Visual-Native Selection mechanism that independently assesses token significance with visual and native experts, then aggregates their scores to retain the most critical subset. Extensive experiments show that our approach reduces the number of visual tokens by 75–95% while achieving comparable or superior performance across 12 benchmarks, significantly improving efficiency.
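To make the global-fusion idea concrete, the following is a minimal sketch of a selective state-space scan in PyTorch: each token updates a compact running state through input-dependent gates, so every position aggregates holistic context at linear cost in sequence length. All class, parameter, and gate names here are illustrative assumptions, not the paper's Flash Global Fusion implementation.

```python
# Toy selective state-space recurrence (assumed names; not the authors' code).
# Real implementations use parallel scan kernels; this loop is for clarity.
import torch
import torch.nn as nn


class SelectiveScanFusion(nn.Module):
    def __init__(self, dim: int, state_dim: int = 16):
        super().__init__()
        self.to_decay = nn.Linear(dim, state_dim)  # input-dependent forget gate
        self.to_input = nn.Linear(dim, state_dim)  # input-dependent write gate
        self.proj_in = nn.Linear(dim, state_dim)
        self.proj_out = nn.Linear(state_dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        b, n, _ = x.shape
        h = x.new_zeros(b, self.proj_in.out_features)
        outs = []
        for t in range(n):
            a = torch.sigmoid(self.to_decay(x[:, t]))  # how much old state to keep
            u = torch.sigmoid(self.to_input(x[:, t]))  # how much new input to write
            h = a * h + u * self.proj_in(x[:, t])      # selective state update
            outs.append(self.proj_out(h))
        return torch.stack(outs, dim=1)                # (batch, seq_len, dim)
```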
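Similarly, the Visual-Native Selection step can be pictured as two lightweight scorers whose aggregated scores drive a top-k token selection. The sketch below is a hedged approximation under assumed names and an assumed score-averaging rule; the paper's actual experts and aggregation scheme may differ.

```python
# Hypothetical sketch of dual-expert token selection (assumptions throughout):
# a "visual" and a "native" scorer each rate every token, scores are averaged,
# and only the top-k tokens are retained (e.g. keep_ratio=0.25 -> 75% reduction).
import torch
import torch.nn as nn


class VisualNativeSelection(nn.Module):
    def __init__(self, dim: int, keep_ratio: float = 0.25):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.visual_scorer = nn.Linear(dim, 1)  # scores tokens from visual features
        self.native_scorer = nn.Linear(dim, 1)  # scores tokens in the LLM's space

    def forward(self, vis_tokens: torch.Tensor, native_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens, native_tokens: (batch, num_tokens, dim), two views of the same tokens
        s_vis = self.visual_scorer(vis_tokens).squeeze(-1)     # (batch, num_tokens)
        s_nat = self.native_scorer(native_tokens).squeeze(-1)  # (batch, num_tokens)
        scores = 0.5 * (s_vis + s_nat)                         # simple average aggregation (assumed)
        k = max(1, int(self.keep_ratio * vis_tokens.size(1)))
        keep = scores.topk(k, dim=1).indices.sort(dim=1).values  # preserve token order
        idx = keep.unsqueeze(-1).expand(-1, -1, vis_tokens.size(-1))
        return vis_tokens.gather(1, idx)                       # (batch, k, dim) retained subset
```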
@article{li2025_2505.11945,
  title   = {Top-Down Compression: Revisit Efficient Vision Token Projection for Visual Instruction Tuning},
  author  = {Bonan Li and Zicheng Zhang and Songhua Liu and Weihao Yu and Xinchao Wang},
  journal = {arXiv preprint arXiv:2505.11945},
  year    = {2025}
}