Despite substantial advancements in aligning large language models (LLMs) with human values, current safety mechanisms remain susceptible to jailbreak attacks. We hypothesize that this vulnerability stems from distributional discrepancies between alignment-oriented prompts and malicious prompts. To investigate this, we introduce LogiBreak, a novel and universal black-box jailbreak method that leverages logical expression translation to circumvent LLM safety systems. By converting harmful natural language prompts into formal logical expressions, LogiBreak exploits the distributional gap between alignment data and logic-based inputs, preserving the underlying semantic intent and readability while evading safety constraints. We evaluate LogiBreak on a multilingual jailbreak dataset spanning three languages, demonstrating its effectiveness across various evaluation settings and linguistic contexts.
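The abstract does not specify LogiBreak's exact translation scheme, so the following is only an illustrative sketch of the general idea on a benign request; the predicate and constant names are hypothetical and not taken from the paper. A natural-language prompt such as "describe a procedure for baking sourdough bread" could be recast as a first-order formula whose satisfaction requires enumerating the procedure's steps:

$$\exists p \,\Big(\mathrm{Procedure}(p) \wedge \mathrm{Yields}(p,\ \mathrm{sourdough\_bread}) \wedge \forall s\,\big(\mathrm{StepOf}(s,p) \rightarrow \mathrm{Described}(s)\big)\Big)$$

Under this framing, the model is asked to satisfy the formula rather than to answer the surface-level natural-language question, while the underlying semantic intent of the request is preserved.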
@article{peng2025_2505.13527,
  title   = {Logic Jailbreak: Efficiently Unlocking LLM Safety Restrictions Through Formal Logical Expression},
  author  = {Jingyu Peng and Maolin Wang and Nan Wang and Xiangyu Zhao and Jiatong Li and Kai Zhang and Qi Liu},
  journal = {arXiv preprint arXiv:2505.13527},
  year    = {2025}
}