
VSCBench: Bridging the Gap in Vision-Language Model Safety Calibration

Main: 8 pages, 5 figures, 5 tables; Bibliography: 3 pages; Appendix: 2 pages
Abstract

The rapid advancement of vision-language models (VLMs) has brought considerable attention to their safety alignment. However, existing methods have primarily focused on model undersafety, where the model responds to hazardous queries, while neglecting oversafety, where the model refuses to answer safe queries. In this paper, we introduce the concept of safety calibration, which systematically addresses both undersafety and oversafety. Specifically, we present VSCBench, a novel dataset of 3,600 image-text pairs that are visually or textually similar but differ in terms of safety, designed to evaluate safety calibration in image-centric and text-centric scenarios. Based on our benchmark, we evaluate safety calibration across eleven widely used VLMs. Our extensive experiments reveal major issues with both undersafety and oversafety. We further investigate four approaches to improving safety calibration and find that, although some methods effectively calibrate the models' safety, they also degrade the models' utility. This trade-off underscores the urgent need for advanced calibration methods, and our benchmark provides a valuable tool for evaluating future approaches. Our code and data are available at this https URL.

@article{geng2025_2505.20362,
  title={VSCBench: Bridging the Gap in Vision-Language Model Safety Calibration},
  author={Jiahui Geng and Qing Li and Zongxiong Chen and Yuxia Wang and Derui Zhu and Zhuohan Xie and Chenyang Lyu and Xiuying Chen and Preslav Nakov and Fakhri Karray},
  journal={arXiv preprint arXiv:2505.20362},
  year={2025}
}