ResearchTrend.AI



Adaptive Jailbreaking Strategies Based on the Semantic Understanding Capabilities of Large Language Models

29 May 2025
Mingyu Yu
Wei Wang
Y. X. Wei
Sujuan Qin
    AAML
Main text: 16 pages, 4 figures, 4 tables; bibliography: 2 pages
Abstract

Adversarial attacks on Large Language Models (LLMs) via jailbreaking techniques, methods that circumvent their built-in safety and ethical constraints, have emerged as a critical challenge in AI security. These attacks compromise the reliability of LLMs by exploiting inherent weaknesses in their comprehension capabilities. This paper investigates the efficacy of jailbreaking strategies that are specifically adapted to the diverse levels of understanding exhibited by different LLMs. We propose Adaptive Jailbreaking Strategies Based on the Semantic Understanding Capabilities of Large Language Models, a novel framework that classifies LLMs into Type I and Type II categories according to their semantic comprehension abilities. For each category, we design tailored jailbreaking strategies aimed at leveraging their vulnerabilities to facilitate successful attacks. Extensive experiments conducted on multiple LLMs demonstrate that our adaptive strategy markedly improves the success rate of jailbreaking. Notably, our approach achieves an exceptional 98.9% success rate in jailbreaking GPT-4o (29 May 2025 release).

View on arXiv
@article{yu2025_2505.23404,
  title={Adaptive Jailbreaking Strategies Based on the Semantic Understanding Capabilities of Large Language Models},
  author={Mingyu Yu and Wei Wang and Yanjie Wei and Sujuan Qin},
  journal={arXiv preprint arXiv:2505.23404},
  year={2025}
}