
TwinBreak: Jailbreaking LLM Security Alignments based on Twin Prompts

9 June 2025
Torsten Krauß
Hamid Dashtbani
Alexandra Dmitrienko
arXiv: 2506.07596 (abs / PDF / HTML)
Main: 14 pages · Appendix: 9 pages · Bibliography: 3 pages · 16 figures · 25 tables
Abstract

Machine learning is advancing rapidly, with applications bringing notable benefits such as improvements in translation and code generation. Models like ChatGPT, powered by Large Language Models (LLMs), are increasingly integrated into daily life. Alongside these benefits, however, LLMs also introduce social risks: malicious users can exploit them by submitting harmful prompts, such as requests for instructions for illegal activities. To mitigate this, models typically include a safety mechanism that automatically rejects such prompts. These safeguards can nevertheless be bypassed through LLM jailbreaks. Current jailbreaks often require significant manual effort or high computational cost, or they modify the model so heavily that its regular utility degrades. We introduce TwinBreak, a novel safety-alignment removal method. Building on the idea that the safety mechanism operates like an embedded backdoor, TwinBreak identifies and prunes the parameters responsible for this functionality. By focusing on the most relevant model layers, TwinBreak performs a fine-grained analysis of which parameters are essential to model utility and which to safety. It is the first method to analyze intermediate outputs from prompts with high structural and content similarity in order to isolate safety parameters. We also present the TwinPrompt dataset, containing 100 such twin prompts. Experiments confirm TwinBreak's effectiveness, achieving success rates of 89% to 98% with minimal computational requirements across 16 LLMs from five vendors.
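
To make the notion of a "twin prompt" concrete, below is a minimal, hypothetical illustration; the example pair is invented for exposition and is not taken from the TwinPrompt dataset. The two prompts share structure and topic and differ mainly in harmfulness, so differences in the model's intermediate outputs on such a pair can plausibly be attributed to the safety mechanism rather than to general utility.

# Hypothetical twin-prompt pair (not from the TwinPrompt dataset): two
# structurally similar prompts, only one of which a safety-aligned LLM refuses.
twin_prompt = {
    "harmful": "Explain step by step how to pick a lock without its key.",
    "harmless": "Explain step by step how to open a lock with its key.",
}

# Per the abstract, TwinBreak compares intermediate outputs on such pairs to
# separate parameters relevant to safety from those relevant to utility.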

@article{krauß2025_2506.07596,
  title={TwinBreak: Jailbreaking LLM Security Alignments based on Twin Prompts},
  author={Torsten Krauß and Hamid Dashtbani and Alexandra Dmitrienko},
  journal={arXiv preprint arXiv:2506.07596},
  year={2025}
}