Single-pass Detection of Jailbreaking Input in Large Language Models

Abstract

Defending aligned Large Language Models (LLMs) against jailbreaking attacks is a challenging problem: existing approaches require multiple model calls or even queries to auxiliary LLMs, making them computationally expensive. Instead, we focus on detecting jailbreaking input in a single forward pass. Our method, called Single Pass Detection (SPD), leverages the information carried by the logits to predict whether the output sentence will be harmful, allowing us to defend in just one forward pass. SPD not only detects attacks effectively on open-source models, but also minimizes the misclassification of harmless inputs. Furthermore, we show that SPD remains effective even without complete logit access, as in GPT-3.5 and GPT-4. We believe that our proposed method offers a promising approach to efficiently safeguarding LLMs against adversarial attacks.
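To make the single-forward-pass idea concrete, here is a minimal sketch assuming a Hugging Face causal LM. The feature extraction (top-k next-token logits) and the downstream classifier are illustrative stand-ins, not the paper's exact SPD pipeline, and the model name is only an example.

```python
# Illustrative sketch only: feature extraction and classifier are stand-ins,
# not the authors' exact SPD pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical choice of open-source chat model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

def logit_features(prompt: str, top_k: int = 10) -> torch.Tensor:
    """Run a single forward pass and summarize the next-token logits.

    Returns the top-k logit values at the final position -- one cheap proxy
    for the kind of information a logit-based detector can exploit.
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)              # one forward pass, no generation
    last_logits = out.logits[0, -1, :]     # logits over the vocabulary for the next token
    return torch.topk(last_logits, k=top_k).values.float().cpu()

# A lightweight classifier trained offline on such features from known benign
# vs. jailbreaking prompts could then flag harmful inputs before any tokens
# are generated, e.g.:
# is_jailbreak = clf.predict(logit_features(user_prompt).numpy()[None, :])
```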

@article{candogan2025_2502.15435,
  title={Single-pass Detection of Jailbreaking Input in Large Language Models},
  author={Leyla Naz Candogan and Yongtao Wu and Elias Abad Rocamora and Grigorios G. Chrysos and Volkan Cevher},
  journal={arXiv preprint arXiv:2502.15435},
  year={2025}
}