Sparse-to-Dense: A Free Lunch for Lossless Acceleration of Video Understanding in LLMs

25 May 2025
Xuan Zhang
Cunxiao Du
Sicheng Yu
Jiawei Wu
Fengzhuo Zhang
Wei Gao
Qian Liu
Abstract

Due to the auto-regressive nature of current video large language models (Video-LLMs), inference latency increases as the input sequence length grows, posing challenges for the efficient processing of video sequences, which are usually very long. We observe that during decoding, the attention scores of most tokens in Video-LLMs tend to be sparse and concentrated, with only certain tokens requiring comprehensive full attention. Based on this insight, we introduce Sparse-to-Dense (StD), a novel decoding strategy that integrates two distinct modules: one leveraging sparse top-K attention and the other employing dense full attention. These modules collaborate to accelerate Video-LLMs losslessly: the fast (sparse) model speculatively decodes multiple tokens, while the slow (dense) model verifies them in parallel. StD is a tuning-free, plug-and-play solution that achieves up to a 1.94× wall-time speedup in video processing. It maintains model performance while enabling a seamless transition from a standard Video-LLM to a sparse Video-LLM with minimal code modifications.
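
Concretely, this follows the standard speculative-decoding loop: the sparse top-K-attention model drafts a short run of tokens autoregressively, and the dense model scores all of them in one parallel forward pass, accepting the longest prefix that matches its own greedy choices. The sketch below illustrates that control flow under greedy decoding with toy stand-in models; dense_logits, sparse_next_token, VOCAB, and GAMMA are hypothetical placeholders for illustration, not the paper's implementation.

    import numpy as np

    VOCAB = 50   # toy vocabulary size (hypothetical)
    GAMMA = 4    # draft tokens per speculation round (hypothetical)

    rng = np.random.default_rng(0)

    def dense_logits(seq):
        """Stand-in for a dense full-attention forward pass. Returns logits
        for every position, so all GAMMA draft tokens can be verified in one
        parallel call. out[t] predicts the token that follows seq[:t+1]."""
        out = []
        for t in range(len(seq)):
            pos_rng = np.random.default_rng(hash(tuple(seq[:t + 1])) % 2**32)
            out.append(pos_rng.standard_normal(VOCAB))
        return np.stack(out)

    def sparse_next_token(seq):
        """Stand-in for the fast sparse top-K-attention draft step: here,
        just the dense logits plus noise, i.e. a cheap approximation that
        usually but not always agrees with the dense model."""
        noisy = dense_logits(seq)[-1] + 0.5 * rng.standard_normal(VOCAB)
        return int(np.argmax(noisy))

    def std_decode(prompt, max_new_tokens=16):
        seq = list(prompt)
        produced = 0
        while produced < max_new_tokens:
            # 1) Draft: the sparse model proposes GAMMA tokens one by one.
            draft = []
            for _ in range(GAMMA):
                draft.append(sparse_next_token(seq + draft))
            # 2) Verify: one dense pass scores every draft position at once.
            logits = dense_logits(seq + draft)
            accepted = []
            for i, tok in enumerate(draft):
                # Dense greedy choice given context seq + draft[:i].
                target = int(np.argmax(logits[len(seq) + i - 1]))
                if tok == target:
                    accepted.append(tok)      # draft agrees with dense model
                else:
                    accepted.append(target)   # fix first mismatch, drop rest
                    break
            # (A full implementation would also emit the dense model's
            # "bonus" token when every draft token is accepted.)
            seq.extend(accepted)
            produced += len(accepted)
        return seq

    print(std_decode([1, 2, 3]))

Because every emitted token equals what dense greedy decoding would have produced, the output is unchanged; the speedup comes from running the dense model once per GAMMA drafted tokens rather than once per token. The abstract's "seamless transition" suggests the draft model can simply be the same Video-LLM with attention restricted to the top-K keys.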

View on arXiv
@article{zhang2025_2505.19155,
  title={Sparse-to-Dense: A Free Lunch for Lossless Acceleration of Video Understanding in LLMs},
  author={Xuan Zhang and Cunxiao Du and Sicheng Yu and Jiawei Wu and Fengzhuo Zhang and Wei Gao and Qian Liu},
  journal={arXiv preprint arXiv:2505.19155},
  year={2025}
}