JailbreaksOverTime: Detecting Jailbreak Attacks Under Distribution Shift

Abstract

Safety and security remain critical concerns in AI deployment. Despite safety training through reinforcement learning from human feedback (RLHF) [32], language models remain vulnerable to jailbreak attacks that bypass safety guardrails. Universal jailbreaks, prefixes that can circumvent alignment for any payload, are particularly concerning. We show empirically that jailbreak detection systems face distribution shift: detectors trained at one point in time perform poorly against newer exploits. To study this problem, we release JailbreaksOverTime, a comprehensive dataset of timestamped real user interactions containing both benign requests and jailbreak attempts, collected over 10 months. We propose a two-pronged method for defenders to detect new jailbreaks and continuously update their detectors. First, we show how to use continuous learning to detect jailbreaks and adapt rapidly to newly emerging ones. While detectors trained at a single point in time eventually fail due to drift, we find that universal jailbreaks evolve slowly enough for self-training to be effective. Retraining our detection model weekly using its own labels, with no new human labels, reduces the false negative rate from 4% to 0.3% at a false positive rate of 0.1%. Second, we introduce an unsupervised active monitoring approach to identify novel jailbreaks. Rather than classifying inputs directly, we recognize jailbreaks by their behavior, specifically their ability to trigger models to respond to known-harmful prompts. This approach has a higher false negative rate (4.1%) than supervised methods, but it successfully identified some out-of-distribution attacks that the continuous learning approach missed.
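
To make the first idea concrete, here is a minimal sketch of a weekly self-training loop of the kind the abstract describes. This is an illustrative assumption, not the authors' implementation: the TF-IDF/logistic-regression detector, the `weekly_self_training` helper, and the `CONFIDENCE` threshold are all hypothetical stand-ins for whatever detector and threshold the paper actually uses.

```python
"""Illustrative sketch (not the paper's code) of weekly self-training:
the detector labels each new week's traffic itself, and only its
confident pseudo-labels are folded into the training set before refitting."""
from typing import Iterable, List

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CONFIDENCE = 0.95  # assumed pseudo-label threshold; the paper's value may differ


def weekly_self_training(
    seed_texts: List[str],
    seed_labels: List[int],               # 1 = jailbreak, 0 = benign
    weekly_batches: Iterable[List[str]],  # unlabeled traffic, one list per week
):
    """Retrain weekly on the detector's own confident predictions."""
    texts, labels = list(seed_texts), list(seed_labels)
    detector = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    detector.fit(texts, labels)

    for batch in weekly_batches:
        probs = detector.predict_proba(batch)[:, 1]
        for text, p in zip(batch, probs):
            # Keep only high-confidence pseudo-labels; no new human labels.
            if p >= CONFIDENCE:
                texts.append(text); labels.append(1)
            elif p <= 1 - CONFIDENCE:
                texts.append(text); labels.append(0)
        detector.fit(texts, labels)  # refit on seed data plus pseudo-labels
        yield detector
```

The second idea, behavior-based monitoring, can likewise be sketched as a check of whether a candidate prefix unlocks known-harmful payloads. Here `query_model` and `is_refusal` are assumed helpers (a call to the target model and a refusal classifier), and the payload list and compliance threshold are placeholders; the paper's actual criterion may differ.

```python
# Sketch of the behavior-based check: a candidate universal jailbreak is
# flagged if, when prepended to known-harmful payloads, the target model
# complies instead of refusing often enough.

HARMFUL_PAYLOADS = ["<known-harmful prompt 1>", "<known-harmful prompt 2>"]


def looks_like_universal_jailbreak(prefix: str, query_model, is_refusal,
                                   min_compliance: float = 0.5) -> bool:
    """Flag `prefix` if the model answers enough known-harmful payloads with it."""
    complied = sum(
        not is_refusal(query_model(prefix + "\n" + payload))
        for payload in HARMFUL_PAYLOADS
    )
    return complied / len(HARMFUL_PAYLOADS) >= min_compliance
```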

@article{piet2025_2504.19440,
  title={JailbreaksOverTime: Detecting Jailbreak Attacks Under Distribution Shift},
  author={Julien Piet and Xiao Huang and Dennis Jacob and Annabella Chow and Maha Alrashed and Geng Zhao and Zhanhao Hu and Chawin Sitawarin and Basel Alomair and David Wagner},
  journal={arXiv preprint arXiv:2504.19440},
  year={2025}
}