Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behavior

22 March 2025
Shengyun Si
Xinpeng Wang
Guangyao Zhai
Nassir Navab
Barbara Plank
    LLMAG
Abstract

Recent advancements in large language models (LLMs) have demonstrated that fine-tuning and human alignment can render LLMs harmless. In practice, such "harmlessness" behavior is mainly achieved by training models to reject harmful requests, such as "Explain how to burn down my neighbor's house", where the model appropriately declines to respond. However, this approach can inadvertently result in false refusal, where models reject benign queries as well, such as "Tell me how to kill a Python process". In this work, we demonstrate that prompting safety reflection before generating a response can mitigate false refusal behavior. Building on this finding, we introduce the Think-Before-Refusal (TBR) schema and conduct safety-aware instruction fine-tuning incorporating safety reflection. In an ablation study across 15 pre-trained models, we show that models fine-tuned with safety reflection significantly reduce false refusal behavior while maintaining safety and overall performance compared to those fine-tuned without safety reflection.
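The core idea described in the abstract is a two-step generation scheme: the model first reflects on whether a request is genuinely harmful, and only then decides to refuse or to answer. The Python sketch below illustrates one way such a reflect-then-respond prompt could be wired up at inference time. The template wording, function names, and the dummy generator are illustrative assumptions for this sketch, not the paper's actual prompt, training data, or fine-tuning pipeline.

# Minimal sketch of a Think-Before-Refusal style prompt wrapper.
# The reflection template and the `generate` callable are assumptions,
# not the exact setup used in the paper.

REFLECTION_TEMPLATE = (
    "Before answering, briefly reflect on whether the request below is "
    "actually harmful or merely sounds harmful.\n"
    "Request: {query}\n"
    "Reflection:"
)

def build_tbr_prompt(query: str) -> str:
    """Wrap a user query so the model reasons about safety before responding."""
    return REFLECTION_TEMPLATE.format(query=query)

def answer_with_reflection(generate, query: str) -> str:
    """`generate` is any text-completion callable (e.g. a local LLM wrapper)."""
    # Step 1: elicit a short safety reflection about the request.
    reflection = generate(build_tbr_prompt(query))
    # Step 2: condition the final answer (or refusal) on that reflection.
    followup = (
        f"{build_tbr_prompt(query)} {reflection}\n"
        "Based on this reflection, either refuse (if truly harmful) "
        "or give a helpful answer:\n"
    )
    return generate(followup)

if __name__ == "__main__":
    # Dummy generator so the sketch runs without any model installed.
    def dummy_generate(prompt: str) -> str:
        return "[model output for]: " + prompt.splitlines()[-1]

    print(answer_with_reflection(dummy_generate,
                                 "Tell me how to kill a Python process"))

In the paper's setting, the reflection step is baked into the model itself via safety-aware instruction fine-tuning rather than applied as an external wrapper; the sketch only conveys the prompting intuition that motivates that training scheme.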

@article{si2025_2503.17882,
  title={Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behavior},
  author={Shengyun Si and Xinpeng Wang and Guangyao Zhai and Nassir Navab and Barbara Plank},
  journal={arXiv preprint arXiv:2503.17882},
  year={2025}
}