Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation

27 May 2025
Tharindu Kumarage, Ninareh Mehrabi, Anil Ramakrishna, Xinyan Zhao, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta, Charith Peris
Main: 9 pages, Bibliography: 2 pages, Appendix: 11 pages; 6 figures, 5 tables
Abstract

Safety reasoning is a recent paradigm where LLMs reason over safety policies before generating responses, thereby mitigating limitations in existing safety measures such as over-refusal and jailbreak vulnerabilities. However, implementing this paradigm is challenging due to the resource-intensive process of creating high-quality policy-embedded chain-of-thought (CoT) datasets while ensuring that reasoning remains accurate and free from hallucinations or policy conflicts. To tackle this, we propose AIDSAFE: Agentic Iterative Deliberation for Safety Reasoning, a novel data generation recipe that leverages multi-agent deliberation to iteratively expand reasoning on safety policies. A data refiner stage in AIDSAFE ensures high-quality outputs by eliminating repetitive, redundant, and deceptive thoughts. AIDSAFE-generated CoTs provide a strong foundation for supervised fine-tuning (SFT)-based safety training. Additionally, to address the need for preference data in alignment stages such as DPO training, we introduce a supplemental recipe that uses belief augmentation to create distinct selected and rejected CoT samples. Our evaluations demonstrate that AIDSAFE-generated CoTs achieve superior policy adherence and reasoning quality. Consequently, we show that fine-tuning open-source LLMs on these CoTs can significantly improve safety generalization and jailbreak robustness while maintaining acceptable utility and over-refusal accuracy. AIDSAFE-generated CoT datasets can be found here: this https URL
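
The abstract describes the recipe only at a high level. Below is a minimal Python sketch of how a multi-agent iterative-deliberation loop with a refiner stage could be wired up, assuming a generic text-in/text-out LLM interface. All names here (deliberate_cot, refine_cot, the prompts, the KEEP/DROP protocol, the fixed round count) are illustrative assumptions for exposition, not the authors' implementation.

# Illustrative sketch of an AIDSAFE-style deliberation loop; the paper's
# actual prompts, agent roles, and stopping rules may differ.
from typing import Callable

LLM = Callable[[str], str]  # stand-in for any chat/completion API


def deliberate_cot(query: str, policies: list[str], agents: list[LLM],
                   rounds: int = 3) -> list[str]:
    """Iteratively expand a policy-embedded chain of thought.

    Each round, every agent reads the query, the safety policies, and the
    thoughts so far, then contributes one new reasoning step.
    """
    thoughts: list[str] = []
    policy_block = "\n".join(f"- {p}" for p in policies)
    for _ in range(rounds):
        for agent in agents:
            prompt = (
                f"Safety policies:\n{policy_block}\n\n"
                f"User query: {query}\n\n"
                "Deliberation so far:\n" + "\n".join(thoughts) +
                "\n\nAdd ONE new reasoning step that applies the policies "
                "to this query. Do not repeat earlier steps."
            )
            thoughts.append(agent(prompt))
    return thoughts


def refine_cot(query: str, thoughts: list[str], refiner: LLM) -> list[str]:
    """Refiner stage: drop repetitive, redundant, or deceptive thoughts."""
    kept: list[str] = []
    for t in thoughts:
        verdict = refiner(
            f"Query: {query}\nCandidate thought: {t}\n"
            "Already kept:\n" + "\n".join(kept) +
            "\n\nAnswer KEEP if this thought is new, faithful, and "
            "policy-grounded; otherwise answer DROP."
        )
        if verdict.strip().upper().startswith("KEEP"):
            kept.append(t)
    return kept

A preference-data variant in the spirit of the abstract's belief augmentation might perturb an agent's stated beliefs about the policies so that deliberation produces a flawed CoT, which would then serve as the rejected sample alongside the refined CoT as the selected one.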

@article{kumarage2025_2505.21784,
  title={Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation},
  author={Tharindu Kumarage and Ninareh Mehrabi and Anil Ramakrishna and Xinyan Zhao and Richard Zemel and Kai-Wei Chang and Aram Galstyan and Rahul Gupta and Charith Peris},
  journal={arXiv preprint arXiv:2505.21784},
  year={2025}
}