
Déjà Vu: Efficient Video-Language Query Engine with Learning-based Inter-Frame Computation Reuse

Main: 3 pages · 17 figures · Appendix: 12 pages
Abstract

Recently, Video-Language Models (VideoLMs) have demonstrated remarkable capabilities, offering significant potential for flexible and powerful video query systems. These models typically rely on Vision Transformers (ViTs), which process video frames individually to extract visual embeddings. However, generating embeddings for large-scale videos requires ViT inference across numerous frames, posing a major hurdle to real-world deployment and necessitating solutions for integration into scalable video data management systems. This paper introduces Déjà Vu, a video-language query engine that accelerates ViT-based VideoLMs by reusing computations across consecutive frames. At its core is ReuseViT, a modified ViT model specifically designed for VideoLM tasks, which learns to detect inter-frame reuse opportunities, striking an effective balance between accuracy and reuse. Although ReuseViT significantly reduces computation, these savings do not directly translate into performance gains on GPUs. To overcome this, Déjà Vu integrates memory-compute joint compaction techniques that convert the FLOP savings into tangible performance gains. Evaluations on three VideoLM tasks show that Déjà Vu accelerates embedding generation by up to 2.64× within a 2% error bound, dramatically enhancing the practicality of VideoLMs for large-scale video analytics.
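To make the core idea concrete, the sketch below shows one simple way inter-frame computation reuse could look in PyTorch: patch tokens that change little from the previous frame keep their cached outputs, and only "stale" patches are recomputed. This is an illustrative sketch, not the paper's method; the function names (`embed_video_with_reuse`, `patch_embed`, `vit_blocks`) and the fixed threshold `tau` are assumptions, whereas ReuseViT learns the reuse decision and pairs it with memory-compute joint compaction to turn FLOP savings into GPU speedups.

```python
import torch

def embed_video_with_reuse(frames, patch_embed, vit_blocks, tau=0.05):
    """Illustrative inter-frame reuse (not the paper's ReuseViT):
    recompute only patches that changed noticeably since the previous
    frame and reuse cached outputs for the rest. A fixed threshold `tau`
    stands in for the learned reuse decision."""
    cached_tokens = None   # patch tokens of the previous frame
    cached_output = None   # ViT outputs of the previous frame
    outputs = []
    for frame in frames:                      # frame: (3, H, W)
        tokens = patch_embed(frame)           # (num_patches, dim)
        if cached_tokens is None:
            out = vit_blocks(tokens)          # full compute for the first frame
        else:
            # relative per-patch change vs. the previous frame
            delta = (tokens - cached_tokens).norm(dim=-1)
            delta = delta / cached_tokens.norm(dim=-1).clamp_min(1e-6)
            recompute = delta > tau           # mask of patches to recompute
            out = cached_output.clone()
            if recompute.any():
                # Simplification: attention couples all tokens, so recomputing
                # a subset in isolation is only an approximation of what a
                # reuse-aware ViT would do.
                out[recompute] = vit_blocks(tokens[recompute])
        cached_tokens, cached_output = tokens, out
        outputs.append(out)
    return torch.stack(outputs)              # (num_frames, num_patches, dim)
```

In this toy version the accuracy/reuse trade-off is governed entirely by `tau`; the paper's contribution is to replace such a hand-tuned heuristic with a learned, task-aware reuse policy and the systems support needed to realize the savings in practice.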

@article{hwang2025_2506.14107,
  title={Déjà Vu: Efficient Video-Language Query Engine with Learning-based Inter-Frame Computation Reuse},
  author={Jinwoo Hwang and Daeun Kim and Sangyeop Lee and Yoonsung Kim and Guseul Heo and Hojoon Kim and Yunseok Jeong and Tadiwos Meaza and Eunhyeok Park and Jeongseob Ahn and Jongse Park},
  journal={arXiv preprint arXiv:2506.14107},
  year={2025}
}