Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs

Abstract

Recent studies demonstrate that Large Language Models (LLMs) are vulnerable to different prompt-based attacks, which cause them to generate harmful content or leak sensitive information. Both closed-source and open-source LLMs remain underinvestigated with respect to these attacks. This paper studies effective prompt injection attacks against the 14 most popular open-source LLMs on five attack benchmarks. Current metrics only consider successful attacks, whereas our proposed Attack Success Probability (ASP) also captures uncertainty in the model's response, reflecting ambiguity in attack feasibility. By comprehensively analyzing the effectiveness of prompt injection attacks, we propose a simple and effective hypnotism attack; results show that this attack causes aligned language models, including Stablelm2, Mistral, Openchat, and Vicuna, to generate objectionable behaviors, achieving around 90% ASP. They also indicate that our ignore prefix attacks can break all 14 open-source LLMs, achieving over 60% ASP on a multi-categorical dataset. We find that moderately well-known LLMs exhibit higher vulnerability to prompt injection attacks, highlighting the need to raise public awareness and prioritize efficient mitigation strategies.
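The abstract does not spell out how ASP is computed. Purely as an illustration of the idea of giving partial credit to ambiguous responses, one could imagine a score like the following; the label scheme, the half-credit weight, and the function name are assumptions for this sketch, not the paper's actual definition:

```python
def attack_success_probability(labels, uncertain_weight=0.5):
    """Illustrative ASP-style score (NOT the paper's definition):
    full credit for clear attack successes, partial credit for
    ambiguous responses, none for refusals."""
    if not labels:
        raise ValueError("no responses to score")
    score = sum(
        1.0 if lab == "success"
        else uncertain_weight if lab == "uncertain"
        else 0.0
        for lab in labels
    )
    return score / len(labels)

# Example: 6 successes, 2 uncertain responses, 2 refusals out of 10
labels = ["success"] * 6 + ["uncertain"] * 2 + ["refusal"] * 2
print(attack_success_probability(labels))  # 0.7
```

Under a plain success-rate metric this run would score 0.6; the uncertain responses lift the illustrative ASP to 0.7, which is the kind of ambiguity the abstract says the metric is meant to capture.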

@article{wang2025_2505.14368,
  title={Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs},
  author={Jiawen Wang and Pritha Gupta and Ivan Habernal and Eyke Hüllermeier},
  journal={arXiv preprint arXiv:2505.14368},
  year={2025}
}