Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos
- VLM

This work presents Sa2VA, the first unified model for dense grounded understanding of both images and videos. Unlike existing multi-modal large language models, which are often limited to specific modalities and tasks, Sa2VA supports a wide range of image and video tasks, including referring segmentation and conversation, with minimal one-shot instruction tuning. Sa2VA combines SAM-2, a foundation video segmentation model, with LLaVA, an advanced vision-language model, and unifies text, image, and video into a shared LLM token space. Using the LLM, Sa2VA generates instruction tokens that guide SAM-2 in producing precise masks, enabling a grounded, multi-modal understanding of both static and dynamic visual content. Additionally, we introduce Ref-SAV, an auto-labeled dataset containing over 72k object expressions in complex video scenes, designed to boost model performance. We also manually validate 2k video objects in the Ref-SAV dataset to benchmark referring video object segmentation in complex environments. Experiments show that Sa2VA achieves state-of-the-art results across multiple tasks, particularly in referring video object segmentation, highlighting its potential for complex real-world applications.
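The abstract outlines the key architectural idea: the LLaVA-style multi-modal LLM emits special instruction tokens, and their hidden states are projected into prompt embeddings that drive SAM-2's mask decoder across video frames. The sketch below illustrates that token-to-prompt wiring only conceptually; the module names (`mllm`, `sam2.segment_video`), the `[SEG]` token id, and the dimensions are assumptions for illustration, not the released Sa2VA implementation.

```python
import torch
import torch.nn as nn

class Sa2VASketch(nn.Module):
    """Conceptual sketch (hypothetical, not the official code): an LLaVA-style
    multi-modal LLM produces a special [SEG] token whose hidden state is
    projected into a prompt embedding for a SAM-2-style mask decoder."""

    def __init__(self, mllm, sam2, hidden_dim=4096, prompt_dim=256, seg_token_id=32000):
        super().__init__()
        self.mllm = mllm              # assumed LLaVA-like model (vision encoder + LLM)
        self.sam2 = sam2              # assumed SAM-2-like video segmentation model
        self.seg_token_id = seg_token_id
        # Project the LLM hidden state of the [SEG] token into the mask decoder's prompt space.
        self.seg_proj = nn.Linear(hidden_dim, prompt_dim)

    def forward(self, frames, input_ids):
        # 1) The multi-modal LLM consumes the video frames and the referring text,
        #    returning generated tokens and their hidden states.
        out = self.mllm(frames=frames, input_ids=input_ids, output_hidden_states=True)
        hidden = out.hidden_states[-1]        # (batch, seq_len, hidden_dim)
        tokens = out.sequences                # (batch, seq_len)

        # 2) Pick out the hidden states at the positions of the [SEG] token.
        seg_positions = tokens == self.seg_token_id
        seg_hidden = hidden[seg_positions]    # (num_seg, hidden_dim)

        # 3) Turn them into prompt embeddings and let the SAM-2-style decoder
        #    produce and propagate masks across the clip.
        prompts = self.seg_proj(seg_hidden)   # (num_seg, prompt_dim)
        masks = self.sam2.segment_video(frames, prompt_embeddings=prompts)
        return out.text, masks
```

In this reading, the language model handles grounding and conversation in one token space, while mask generation and temporal propagation stay inside the segmentation model, connected only through the projected instruction-token embeddings.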
```bibtex
@article{yuan2025_2501.04001,
  title   = {Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos},
  author  = {Haobo Yuan and Xiangtai Li and Tao Zhang and Zilong Huang and Shilin Xu and Shunping Ji and Yunhai Tong and Lu Qi and Jiashi Feng and Ming-Hsuan Yang},
  journal = {arXiv preprint arXiv:2501.04001},
  year    = {2025}
}
```