
Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation

Main: 7 pages · Appendix: 3 pages · Bibliography: 3 pages · 2 figures · 7 tables
Abstract

Intent detection, a core component of natural language understanding, has evolved into a crucial mechanism for safeguarding large language models (LLMs). While prior work has applied intent detection to strengthen LLMs' moderation guardrails, showing significant success against content-level jailbreaks, the robustness of these intent-aware guardrails under malicious manipulation remains under-explored. In this work, we investigate the vulnerability of intent-aware guardrails and demonstrate that LLMs exhibit implicit intent detection capabilities. We propose IntentPrompt, a two-stage intent-based prompt-refinement framework that first transforms harmful inquiries into structured outlines and then reframes them as declarative-style narratives, iteratively optimizing prompts via feedback loops to increase jailbreak success for red-teaming purposes. Extensive experiments across four public benchmarks and various black-box LLMs indicate that our framework consistently outperforms several cutting-edge jailbreak methods and evades even advanced Intent Analysis (IA) and Chain-of-Thought (CoT)-based defenses. Specifically, our "FSTR+SPIN" variant achieves attack success rates ranging from 88.25% to 96.54% against CoT-based defenses on the o1 model, and from 86.75% to 97.12% on the GPT-4o model under IA-based defenses. These findings highlight a critical weakness in LLMs' safety mechanisms and suggest that intent manipulation poses a growing challenge to content moderation guardrails.

@article{zhuang2025_2505.18556,
  title={Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation},
  author={Jun Zhuang and Haibo Jin and Ye Zhang and Zhengjian Kang and Wenbin Zhang and Gaby G. Dagher and Haohan Wang},
  journal={arXiv preprint arXiv:2505.18556},
  year={2025}
}