SafeVid: Toward Safety Aligned Video Large Multimodal Models

Abstract

As Video Large Multimodal Models (VLMMs) rapidly advance, their inherent complexity introduces significant safety challenges, particularly the issue of mismatched generalization, where static safety alignments fail to transfer to dynamic video contexts. We introduce SafeVid, a framework designed to instill video-specific safety principles in VLMMs. SafeVid uniquely transfers robust textual safety alignment capabilities to the video domain by employing detailed textual video descriptions as an interpretive bridge, facilitating LLM-based, rule-driven safety reasoning. This is achieved through a closed-loop system comprising: 1) generation of SafeVid-350K, a novel 350,000-pair video-specific safety preference dataset; 2) targeted alignment of VLMMs using Direct Preference Optimization (DPO); and 3) comprehensive evaluation via our new SafeVidBench benchmark. Alignment with SafeVid-350K significantly enhances VLMM safety, with models like LLaVA-NeXT-Video showing improvements of up to 42.39% on SafeVidBench. SafeVid provides critical resources and a structured approach, demonstrating that leveraging textual descriptions as a conduit for safety reasoning markedly improves the safety alignment of VLMMs. We have made the SafeVid-350K dataset publicly available (this https URL).
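For context, the alignment step described above relies on Direct Preference Optimization over safe/unsafe response pairs. The sketch below is a minimal, illustrative implementation of the standard DPO loss, not code from the paper: the function name dpo_loss, the tensor arguments, and the beta value are assumptions, with the chosen/rejected responses corresponding to the preferred and dispreferred answers in a SafeVid-350K-style preference pair.

# Minimal sketch of the DPO objective (illustrative; not from the paper).
# Inputs are per-example summed log-probabilities of the chosen (safe) and
# rejected (unsafe) responses under the policy and a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratios of policy to reference for each response.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between chosen (safe) and rejected (unsafe) responses.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()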

@article{wang2025_2505.11926,
  title={SafeVid: Toward Safety Aligned Video Large Multimodal Models},
  author={Yixu Wang and Jiaxin Song and Yifeng Gao and Xin Wang and Yang Yao and Yan Teng and Xingjun Ma and Yingchun Wang and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2505.11926},
  year={2025}
}