Multi-Cue Adaptive Visual Token Pruning for Large Vision-Language Models

11 March 2025
Bozhi Luan
Wengang Zhou
Hao Feng
Zhe Wang
Xiaosong Li
Houqiang Li
Abstract

As the computational needs of Large Vision-Language Models (LVLMs) increase, visual token pruning has proven effective in improving inference speed and memory efficiency. Traditional pruning methods in LVLMs predominantly focus on attention scores to determine token relevance, overlooking critical aspects such as spatial position and token similarity. To this end, we introduce AdaptPrune, a novel plug-and-play, training-free pruning method that builds on conventional attention-based pruning by integrating spatial distance and token similarity with an adaptive NMS approach. Our method is based on several phenomena observed in large models: the positional bias in the model's image attention and the redundancy of token information ignored by previous approaches. By integrating attention, spatial, and similarity information, our approach ensures a comprehensive evaluation of token importance and substantially refines the pruning decisions. Our method has been extensively tested across various LVLMs and benchmarks, confirming its robustness and adaptability. The results demonstrate that AdaptPrune consistently outperforms existing methods across various pruning ratios. Code is available at this https URL.
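
The abstract describes scoring visual tokens by attention and then suppressing redundant neighbors using spatial distance and token similarity in an NMS-like loop. The following is a minimal, hypothetical PyTorch sketch of that multi-cue idea; the function name adaptive_token_prune, the weighting parameters, and the greedy suppression loop are illustrative assumptions and not the paper's released implementation.

import torch
import torch.nn.functional as F

def adaptive_token_prune(tokens, attn_scores, positions, keep_ratio=0.5,
                         sim_weight=0.5, dist_weight=0.5):
    # Illustrative multi-cue pruning sketch (assumed interface, not the paper's code).
    # tokens:      (N, D) visual token embeddings
    # attn_scores: (N,)   attention each visual token receives (e.g. from the text query)
    # positions:   (N, 2) normalized (row, col) coordinates of each token in the image grid
    N = tokens.shape[0]
    n_keep = max(1, int(N * keep_ratio))

    # Cosine similarity between tokens, used to measure redundancy.
    feats = F.normalize(tokens, dim=-1)
    sim = feats @ feats.T                      # (N, N)

    # Pairwise spatial distance; nearby tokens are more likely to be redundant.
    dist = torch.cdist(positions, positions)   # (N, N)
    dist = dist / (dist.max() + 1e-6)

    # Greedy NMS-style selection: repeatedly keep the highest-scoring token,
    # then down-weight tokens that are both similar and spatially close to it.
    scores = attn_scores.clone()
    keep = []
    for _ in range(n_keep):
        idx = int(torch.argmax(scores))
        keep.append(idx)
        scores[idx] = float("-inf")
        # Suppression grows with feature similarity and shrinks with spatial distance.
        suppression = sim_weight * sim[idx] + dist_weight * (1.0 - dist[idx])
        scores = scores - suppression
    return torch.tensor(sorted(keep))

# Example usage with random tensors standing in for a 24x24 grid of visual tokens.
tokens = torch.randn(576, 1024)
attn = torch.rand(576)
grid = torch.meshgrid(torch.arange(24), torch.arange(24), indexing="ij")
pos = torch.stack(grid, dim=-1).reshape(-1, 2).float() / 24
kept = adaptive_token_prune(tokens, attn, pos, keep_ratio=0.25)

The weighted combination of similarity and inverse distance is one simple way to fold the three cues into a single suppression term; the actual adaptive NMS scheme in the paper may weight or threshold these cues differently.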

@article{luan2025_2503.08019,
  title={Multi-Cue Adaptive Visual Token Pruning for Large Vision-Language Models},
  author={Bozhi Luan and Wengang Zhou and Hao Feng and Zhe Wang and Xiaosong Li and Houqiang Li},
  journal={arXiv preprint arXiv:2503.08019},
  year={2025}
}