We study suffix-based jailbreaks, a powerful family of attacks against large language models (LLMs) that optimize adversarial suffixes to circumvent safety alignment. Focusing on the widely used, foundational GCG attack (Zou et al., 2023), we observe that suffixes vary in efficacy: some are markedly more universal (generalizing to many unseen harmful instructions) than others. We first show that GCG's effectiveness is driven by a shallow, critical mechanism built on the information flow from the adversarial suffix to the final chat-template tokens before generation. Quantifying the dominance of this mechanism during generation, we find that GCG irregularly and aggressively hijacks the contextualization process. Crucially, we tie hijacking to the universality phenomenon: more universal suffixes are stronger hijackers. Subsequently, we show that these insights have practical implications: GCG universality can be efficiently enhanced (up to 5× in some cases) at no additional computational cost, and it can also be surgically mitigated, at least halving attack success with minimal utility loss. We release our code and data at this http URL.
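The hijacking measurement described above can be sketched as follows: given a layer's attention weights, compute the share of attention mass that the final chat-template (query) positions place on the adversarial-suffix (key) positions. This is a minimal illustrative sketch, not the paper's implementation; the function name, tensor layout, and averaging choices are assumptions.

```python
import numpy as np

def suffix_attention_share(attn, suffix_idx, template_idx):
    """Fraction of attention mass flowing from the final chat-template
    tokens (queries) to the adversarial-suffix tokens (keys).

    attn: array of shape [num_heads, seq_len, seq_len], with each query
          row summing to 1 (e.g., one layer's post-softmax weights).
    suffix_idx / template_idx: position indices of the suffix and of the
          final chat-template tokens (illustrative inputs).
    """
    # Rows: template queries; columns: suffix keys.
    mass = attn[:, template_idx][:, :, suffix_idx].sum(axis=-1)  # [heads, |template|]
    # Average over heads and template positions to get a single score.
    return float(mass.mean())
```

Under this sketch, a stronger "hijacker" suffix would yield a larger share; for uniform attention over a length-`n` sequence the share is simply `len(suffix_idx) / n`, which serves as a sanity-check baseline.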
@article{ben-tov2025_2506.12880,
  title={Universal Jailbreak Suffixes Are Strong Attention Hijackers},
  author={Matan Ben-Tov and Mor Geva and Mahmood Sharif},
  journal={arXiv preprint arXiv:2506.12880},
  year={2025}
}