Breaking the Encoder Barrier for Seamless Video-Language Understanding

24 March 2025
Handong Li
Yiyuan Zhang
Longteng Guo
Xiangyu Yue
Jing Liu
Abstract

Most Video-Large Language Models (Video-LLMs) adopt an encoder-decoder framework, where a vision encoder extracts frame-wise features for processing by a language model. However, this approach incurs high computational costs, introduces resolution biases, and struggles to capture fine-grained multimodal interactions. To overcome these limitations, we propose ELVA, an encoder-free Video-LLM that directly models nuanced video-language interactions without relying on a vision encoder. ELVA employs token merging to construct a bottom-up hierarchical representation and incorporates a video guidance supervisor for direct spatiotemporal representation learning. Additionally, a hybrid-resolution mechanism strategically integrates high- and low-resolution frames as inputs to achieve an optimal balance between performance and efficiency. With only 7M publicly available video-text pairs, ELVA achieves performance on par with encoder-based Video-LLMs while reducing FLOPs by up to 95% and inference latency by 92%, offering a scalable and efficient solution for real-time video understanding.
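
The abstract compresses each mechanism into a single sentence; to make the first one concrete, here is a minimal sketch of pairwise token merging in PyTorch. The function name merge_tokens, the fixed even/odd pairing, and the averaging rule are illustrative assumptions for exposition only, not ELVA's actual merging schedule, which the abstract does not specify.

import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Merge the r most similar adjacent token pairs by averaging.

    x: (num_tokens, dim), with an even number of tokens. Token 2i is
    paired with token 2i+1 so merges never overlap. Returns a tensor
    of shape (num_tokens - r, dim). Illustrative sketch, not ELVA's
    published algorithm.
    """
    n, _ = x.shape
    a, c = x[0::2], x[1::2]                        # candidate pairs, (n//2, dim)
    sim = F.cosine_similarity(a, c, dim=-1)        # redundancy score per pair
    merge_idx = set(sim.topk(r).indices.tolist())  # r most similar pairs
    out = []
    for i in range(n // 2):
        if i in merge_idx:
            out.append((a[i] + c[i]) / 2)          # merge the pair into one token
        else:
            out.extend([a[i], c[i]])               # keep both tokens unchanged
    return torch.stack(out)

# Applying this repeatedly across layers shrinks the token sequence,
# yielding the kind of bottom-up, coarse-to-fine hierarchy over video
# patches that the abstract describes.
tokens = torch.randn(1024, 768)        # e.g., patch tokens from sampled frames
coarser = merge_tokens(tokens, r=256)  # 1024 tokens -> 768 tokens

Because the most redundant pairs are averaged rather than dropped, token count (and hence attention FLOPs) falls at each stage while the merged tokens still summarize the discarded positions, which is how merging schemes of this kind trade compute for a modest approximation error.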

@article{li2025_2503.18422,
  title={Breaking the Encoder Barrier for Seamless Video-Language Understanding},
  author={Handong Li and Yiyuan Zhang and Longteng Guo and Xiangyu Yue and Jing Liu},
  journal={arXiv preprint arXiv:2503.18422},
  year={2025}
}