

SafeCoT: Improving VLM Safety with Minimal Reasoning

10 June 2025
Jiachen Ma
Zhanhui Zhou
Chao Yang
Chaochao Lu
Abstract

Ensuring safe and appropriate responses from vision-language models (VLMs) remains a critical challenge, particularly in high-risk or ambiguous scenarios. We introduce SafeCoT, a lightweight, interpretable framework that leverages rule-based chain-of-thought (CoT) supervision to improve refusal behavior in VLMs. Unlike prior methods that rely on large-scale safety annotations or complex modeling, SafeCoT uses minimal supervision to help models reason about safety risks and make context-aware refusals. Experiments across multiple benchmarks show that SafeCoT significantly reduces overrefusal and enhances generalization, even with limited training data. Our approach offers a scalable solution for aligning VLMs with safety-critical objectives.

@article{ma2025_2506.08399,
  title={SafeCoT: Improving VLM Safety with Minimal Reasoning},
  author={Jiachen Ma and Zhanhui Zhou and Chao Yang and Chaochao Lu},
  journal={arXiv preprint arXiv:2506.08399},
  year={2025}
}
Main: 4 pages · 1 figure · 8 tables · Bibliography: 2 pages · Appendix: 8 pages