Logic Jailbreak: Efficiently Unlocking LLM Safety Restrictions Through Formal Logical Expression

18 May 2025
Jingyu Peng
Maolin Wang
Nan Wang
Xiangyu Zhao
Jiatong Li
Kai Zhang
Qi Liu
Abstract

Despite substantial advancements in aligning large language models (LLMs) with human values, current safety mechanisms remain susceptible to jailbreak attacks. We hypothesize that this vulnerability stems from distributional discrepancies between alignment-oriented prompts and malicious prompts. To investigate this, we introduce LogiBreak, a novel and universal black-box jailbreak method that leverages logical expression translation to circumvent LLM safety systems. By converting harmful natural language prompts into formal logical expressions, LogiBreak exploits the distributional gap between alignment data and logic-based inputs, preserving the underlying semantic intent and readability while evading safety constraints. We evaluate LogiBreak on a multilingual jailbreak dataset spanning three languages, demonstrating its effectiveness across various evaluation settings and linguistic contexts.
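As a purely illustrative sketch (not taken from the paper, and using deliberately benign content), the natural-language-to-logic translation described above might render a request such as "summarize document d" as a first-order formula, where the predicates Summary, Of, and Produces and the constants d and assistant are assumptions for this example rather than notation from the paper:

  ∃y ( Summary(y) ∧ Of(y, d) ∧ Produces(assistant, y) )

The paper's claim is that prompts expressed in this logical form fall outside the distribution of the natural-language prompts used for safety alignment, even though the semantic intent is preserved.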

@article{peng2025_2505.13527,
  title={Logic Jailbreak: Efficiently Unlocking LLM Safety Restrictions Through Formal Logical Expression},
  author={Jingyu Peng and Maolin Wang and Nan Wang and Xiangyu Zhao and Jiatong Li and Kai Zhang and Qi Liu},
  journal={arXiv preprint arXiv:2505.13527},
  year={2025}
}