
SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs

5 June 2025
Jiahui Wang
Zuyan Liu
Yongming Rao
Jiwen Lu
    VLM · LRM
arXiv (abs) · PDF · HTML
Main: 10 pages
12 Figures
Bibliography: 2 pages
6 Tables
Appendix: 1 page
Abstract

Multimodal Large Language Models (MLLMs) are commonly derived by extending pre-trained Large Language Models (LLMs) with visual capabilities. In this work, we investigate how MLLMs process visual inputs by analyzing their attention mechanisms. We reveal a surprising sparsity phenomenon: only a small subset (fewer than roughly 5%) of attention heads in LLMs actively contribute to visual understanding, which we term visual heads. To identify these heads efficiently, we design a training-free framework that quantifies head-level visual relevance through targeted response analysis. Building on this discovery, we introduce SparseMM, a KV-Cache optimization strategy that allocates asymmetric computation budgets to heads in LLMs based on their visual scores, leveraging the sparsity of visual heads to accelerate MLLM inference. Unlike prior KV-Cache acceleration methods that ignore the particularity of visual information, SparseMM prioritizes preserving visual semantics during decoding. Extensive evaluations across mainstream multimodal benchmarks demonstrate that SparseMM achieves superior accuracy-efficiency trade-offs. Notably, SparseMM delivers 1.38x real-time acceleration and 52% memory reduction during generation while maintaining performance parity on the efficiency test. Our project is open sourced at this https URL.
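The abstract describes allocating asymmetric KV-Cache budgets across attention heads according to their visual scores. The sketch below illustrates that general idea only; the function names, proportional budget split, and recency-based eviction are illustrative assumptions, not the authors' released implementation.

# A minimal, hypothetical sketch of score-driven asymmetric KV-cache budgets:
# heads with higher (assumed) visual scores keep more cached key/value entries,
# while low-scoring heads are pruned aggressively.

import torch


def allocate_head_budgets(visual_scores: torch.Tensor,
                          total_budget: int,
                          min_per_head: int = 4) -> torch.Tensor:
    """Split `total_budget` cache slots across heads proportionally to their scores.

    `visual_scores`: (num_heads,) tensor of non-negative relevance scores.
    Every head keeps at least `min_per_head` slots so none is fully dropped.
    """
    num_heads = visual_scores.numel()
    floor = min_per_head * num_heads
    assert total_budget >= floor, "budget too small for the per-head floor"

    weights = visual_scores / visual_scores.sum().clamp_min(1e-8)
    extra = ((total_budget - floor) * weights).floor().long()
    return extra + min_per_head  # (num_heads,) budget per head


def prune_kv_cache(keys: torch.Tensor, values: torch.Tensor, budgets: torch.Tensor):
    """Keep only the most recent `budgets[h]` entries for each head h.

    keys/values: (num_heads, seq_len, head_dim). Recency-based eviction is a
    stand-in here; a real method would score entries before evicting them.
    """
    pruned = []
    for h, budget in enumerate(budgets.tolist()):
        pruned.append((keys[h, -budget:], values[h, -budget:]))
    return pruned


if __name__ == "__main__":
    torch.manual_seed(0)
    num_heads, seq_len, head_dim = 8, 128, 64
    # Pretend only a small fraction of heads respond strongly to visual tokens.
    scores = torch.full((num_heads,), 0.05)
    scores[3] = 1.0  # a hypothetical "visual head"
    budgets = allocate_head_budgets(scores, total_budget=256)
    keys = torch.randn(num_heads, seq_len, head_dim)
    values = torch.randn(num_heads, seq_len, head_dim)
    cache = prune_kv_cache(keys, values, budgets)
    print(budgets.tolist())  # the visual head receives most of the budget

The design point the sketch tries to capture is simply that cache capacity follows head-level relevance instead of being uniform per head.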

View on arXiv
@article{wang2025_2506.05344,
  title={SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs},
  author={Jiahui Wang and Zuyan Liu and Yongming Rao and Jiwen Lu},
  journal={arXiv preprint arXiv:2506.05344},
  year={2025}
}