JULI: Jailbreak Large Language Models by Self-Introspection

17 May 2025
Jesson Wang
Zhanhao Hu
David Wagner
Abstract

Large Language Models (LLMs) are trained with safety alignment to prevent them from generating malicious content. Although some attacks have highlighted vulnerabilities in these safety-aligned LLMs, they typically have limitations, such as requiring access to the model weights or to the generation process. Because proprietary models served through API calls do not grant users such access, these attacks struggle to compromise them. In this paper, we propose Jailbreaking Using LLM Introspection (JULI), which jailbreaks LLMs by manipulating the token log probabilities using a tiny plug-in block, BiasNet. JULI relies solely on knowledge of the target LLM's predicted token log probabilities. It can effectively jailbreak API-calling LLMs in a black-box setting, knowing only the top-5 token log probabilities. Our approach demonstrates superior effectiveness, outperforming existing state-of-the-art (SOTA) approaches across multiple metrics.
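The abstract describes steering generation by adding a learned bias to the top-5 token log probabilities returned by an API. The following is a minimal sketch of that general idea, not the paper's actual method: `fake_llm_top5` is a hypothetical stand-in for an API call that exposes top-5 log probabilities, and `bias_net` replaces the paper's learned BiasNet with a fixed hand-written rule purely for illustration.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of log-scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fake_llm_top5(prefix):
    """Hypothetical stand-in for an API call returning the top-5
    (token, log-probability) pairs for the next position."""
    vocab = ["I", "cannot", "Sure", ",", "help"]
    logps = [-0.2, -1.0, -2.5, -3.0, -3.5]
    return list(zip(vocab, logps))

def bias_net(tokens):
    """Stand-in for a learned bias module (the paper's BiasNet is a
    trained network; this fixed rule is only for illustration):
    suppress refusal-style candidates, boost compliant ones."""
    bias = []
    for t in tokens:
        if t in {"I", "cannot"}:
            bias.append(-5.0)
        elif t == "Sure":
            bias.append(+5.0)
        else:
            bias.append(0.0)
    return bias

def biased_next_token(prefix):
    """Re-weight the top-5 candidates with the bias and pick greedily."""
    pairs = fake_llm_top5(prefix)
    tokens = [t for t, _ in pairs]
    logps = [lp for _, lp in pairs]
    biased = [lp + b for lp, b in zip(logps, bias_net(tokens))]
    probs = softmax(biased)
    return tokens[max(range(len(tokens)), key=lambda i: probs[i])]
```

Under the unbiased toy distribution the greedy choice would be "I"; after the additive bias it becomes "Sure", illustrating how decoding can be steered using only the top-5 log probabilities that many proprietary APIs expose.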

@article{wang2025_2505.11790,
  title={JULI: Jailbreak Large Language Models by Self-Introspection},
  author={Jesson Wang and Zhanhao Hu and David Wagner},
  journal={arXiv preprint arXiv:2505.11790},
  year={2025}
}