Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation

3 April 2025
Chuanqi Cheng
Jian Guan
Wei Wu
Rui Yan
Abstract

Long-form video processing fundamentally challenges vision-language models (VLMs) due to the high computational costs of handling extended temporal sequences. Existing token pruning and feature merging methods often sacrifice critical temporal dependencies or dilute semantic information. We introduce differential distillation, a principled approach that systematically preserves task-relevant information while suppressing redundancy. Based on this principle, we develop ViLaMP, a hierarchical video-language model that processes hour-long videos at "mixed precision" through two key mechanisms: (1) differential keyframe selection that maximizes query relevance while maintaining temporal distinctiveness at the frame level and (2) differential feature merging that preserves query-salient features in non-keyframes at the patch level. Hence, ViLaMP retains full information in keyframes while reducing non-keyframes to their most salient features, resembling mixed-precision training. Extensive experiments demonstrate ViLaMP's superior performance across four video understanding benchmarks, particularly on long-form content. Notably, ViLaMP can process ultra-long videos (up to 10K frames) on a single NVIDIA A100 GPU, achieving substantial computational efficiency while maintaining state-of-the-art performance.
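To make the "mixed precision" analogy concrete, below is a minimal sketch of the two mechanisms described in the abstract, operating on hypothetical frame and patch embeddings. The function names, the greedy relevance-minus-redundancy scoring rule, the softmax-weighted patch pooling, and all hyper-parameters are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F


def select_keyframes(frame_emb, query_emb, num_keyframes=32, alpha=0.5):
    """Greedy differential keyframe selection (illustrative).

    frame_emb: (T, D) per-frame embeddings
    query_emb: (D,)   text-query embedding
    Each step scores frames by query relevance minus redundancy with
    already-selected keyframes, then picks the highest-scoring frame.
    """
    frames = F.normalize(frame_emb, dim=-1)
    query = F.normalize(query_emb, dim=-1)
    relevance = frames @ query                              # (T,)
    selected = []
    for _ in range(min(num_keyframes, frames.size(0))):
        if selected:
            chosen = frames[selected]                       # (k, D)
            redundancy = (frames @ chosen.T).max(dim=-1).values
        else:
            redundancy = torch.zeros_like(relevance)
        score = relevance - alpha * redundancy
        score[selected] = float("-inf")                     # never re-pick a keyframe
        selected.append(int(score.argmax()))
    return sorted(selected)


def merge_non_keyframe(patch_emb, query_emb):
    """Differential feature merging for one non-keyframe (illustrative).

    patch_emb: (P, D) patch embeddings of a non-keyframe
    Collapses the frame to a single token, weighting patches by their
    query salience so salient content dominates the merged feature.
    """
    patches = F.normalize(patch_emb, dim=-1)
    query = F.normalize(query_emb, dim=-1)
    salience = torch.softmax(patches @ query, dim=0)        # (P,)
    return (salience.unsqueeze(-1) * patch_emb).sum(dim=0)  # (D,)


if __name__ == "__main__":
    T, P, D = 10_000, 196, 768                              # ~10K frames, ViT-style patches
    torch.manual_seed(0)
    frame_emb = torch.randn(T, D)
    query_emb = torch.randn(D)
    keyframes = select_keyframes(frame_emb, query_emb, num_keyframes=32)
    merged = merge_non_keyframe(torch.randn(P, D), query_emb)
    print(len(keyframes), merged.shape)                     # 32 torch.Size([768])

In this sketch, keyframes would keep their full patch-level token set while every non-keyframe is reduced to one query-weighted token, mirroring the full-precision/low-precision split the abstract describes.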

@article{cheng2025_2504.02438,
  title={Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation},
  author={Chuanqi Cheng and Jian Guan and Wei Wu and Rui Yan},
  journal={arXiv preprint arXiv:2504.02438},
  year={2025}
}