Temporal-Oriented Recipe for Transferring Large Vision-Language Model to Video Understanding

Recent years have witnessed remarkable advances in large vision-language models (LVLMs). To tackle video understanding, most of them rely on their implicit temporal understanding capacity. As a result, the key components that contribute to temporal understanding ability have not been identified, which might limit the potential of these LVLMs for video understanding. In this work, we conduct a thorough empirical study to demystify the crucial components that influence the temporal understanding of LVLMs. Our empirical study reveals that significant impacts are centered around the intermediate interface between the visual encoder and the large language model. Building on these insights, we propose a temporal-oriented recipe that encompasses temporal-oriented training schemes and an upscaled interface. Our final model, developed with this recipe, significantly outperforms previous LVLMs on standard video understanding tasks.
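The abstract does not specify the design of the intermediate interface or what "upscaled" means concretely. Purely as a minimal sketch, assuming a projector-style interface in which per-frame features from a frozen visual encoder are fused by a small temporal transformer before being projected into the LLM embedding space, the idea could look like the following; the class name `TemporalInterface`, the dimensions, and the frame cap are all hypothetical, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class TemporalInterface(nn.Module):
    """Hypothetical sketch of an 'upscaled' visual-encoder-to-LLM interface:
    spatial tokens are pooled per frame, fused across time with a small
    transformer, then projected to LLM-token dimensionality. All sizes and
    module choices are assumptions for illustration only."""

    def __init__(self, vis_dim=1024, llm_dim=4096, num_layers=2, num_heads=8):
        super().__init__()
        # Learnable temporal position embeddings, assuming at most 64 frames.
        self.time_embed = nn.Parameter(torch.zeros(64, vis_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=vis_dim, nhead=num_heads, batch_first=True
        )
        # "Upscaled" is read here as a deeper temporal encoder (an assumption).
        self.temporal_encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.proj = nn.Linear(vis_dim, llm_dim)  # map into LLM embedding space

    def forward(self, frame_feats):
        # frame_feats: (batch, frames, tokens, vis_dim) from a frozen encoder.
        b, t, n, d = frame_feats.shape
        # Pool spatial tokens per frame, then add temporal positions.
        x = frame_feats.mean(dim=2) + self.time_embed[:t]
        x = self.temporal_encoder(x)  # fuse information across frames
        return self.proj(x)           # (batch, frames, llm_dim) soft visual tokens


# Usage: 8 frames of CLIP-like patch features become 8 soft tokens for the LLM.
feats = torch.randn(2, 8, 256, 1024)
tokens = TemporalInterface()(feats)
print(tokens.shape)  # torch.Size([2, 8, 4096])
```

A projector of this kind is a common LVLM interface pattern; whether the paper's recipe uses pooling, query tokens, or another fusion mechanism would need to be checked against the full text.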
```bibtex
@article{nguyen2025_2505.12605,
  title={Temporal-Oriented Recipe for Transferring Large Vision-Language Model to Video Understanding},
  author={Thong Nguyen and Zhiyuan Hu and Xu Lin and Cong-Duy Nguyen and See-Kiong Ng and Luu Anh Tuan},
  journal={arXiv preprint arXiv:2505.12605},
  year={2025}
}
```