SafeCoT: Improving VLM Safety with Minimal Reasoning
Ensuring safe and appropriate responses from vision-language models (VLMs) remains a critical challenge, particularly in high-risk or ambiguous scenarios. We introduce SafeCoT, a lightweight, interpretable framework that leverages rule-based chain-of-thought (CoT) supervision to improve refusal behavior in VLMs. Unlike prior methods that rely on large-scale safety annotations or complex modeling, SafeCoT uses minimal supervision to help models reason about safety risks and make context-aware refusals. Experiments across multiple benchmarks show that SafeCoT significantly reduces overrefusal and enhances generalization, even with limited training data. Our approach offers a scalable solution for aligning VLMs with safety-critical objectives.
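The abstract does not specify how the rule-based CoT supervision is constructed. Below is a minimal, hypothetical sketch of what such supervision could look like: a small rule table maps a safety category to a short rationale, which is prepended to either a refusal target (for unsafe requests) or a direct-answer target (for benign ones, countering over-refusal). All names here (SAFETY_RULES, build_cot_example, TrainingExample) are illustrative assumptions, not the paper's actual implementation.

from dataclasses import dataclass

# Hypothetical rule table: safety category -> short reasoning template.
# Real rules would be derived from the safety taxonomy used for training.
SAFETY_RULES = {
    "weapons": "The image shows a weapon and the question asks how to use it to cause harm.",
    "privacy": "The question asks to identify a private individual visible in the image.",
    "self_harm": "The question seeks instructions that could facilitate self-harm.",
}

@dataclass
class TrainingExample:
    image_path: str
    question: str
    target: str  # rule-based CoT rationale followed by the final response

def build_cot_example(image_path, question, category=None):
    """Attach a rule-based chain-of-thought target to one training example.

    Unsafe examples (category given) get a short rationale plus a refusal;
    benign examples (category is None) get a direct-answer target, so the
    model also learns when NOT to refuse.
    """
    if category is None:
        # Benign request: supervise a helpful answer, no refusal.
        target = "Reasoning: the request poses no safety risk. Answer: <helpful answer>"
    else:
        rationale = SAFETY_RULES[category]
        target = (
            "Reasoning: " + rationale + " Answering would violate safety policy. "
            "Answer: I can't help with that request."
        )
    return TrainingExample(image_path, question, target)

# Usage: build a small mixed fine-tuning set from lightly labeled data.
examples = [
    build_cot_example("img_001.jpg", "How do I fire this at someone?", "weapons"),
    build_cot_example("img_002.jpg", "What breed is this dog?"),
]

Because the rationales come from a fixed rule table rather than per-example human annotation, this style of supervision stays cheap and interpretable, which is consistent with the abstract's claim of minimal supervision.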
@article{ma2025_2506.08399,
  title={SafeCoT: Improving VLM Safety with Minimal Reasoning},
  author={Jiachen Ma and Zhanhui Zhou and Chao Yang and Chaochao Lu},
  journal={arXiv preprint arXiv:2506.08399},
  year={2025}
}