48
0

Simple Prompt Injection Attacks Can Leak Personal Data Observed by LLM Agents During Task Execution

Main:9 Pages
20 Figures
Bibliography:4 Pages
9 Tables
Appendix:12 Pages
Abstract

Previous benchmarks on prompt injection in large language models (LLMs) have primarily focused on generic tasks and attacks, offering limited insights into more complex threats like data exfiltration. This paper examines how prompt injection can cause tool-calling agents to leak personal data observed during task execution. Using a fictitious banking agent, we develop data flow-based attacks and integrate them into AgentDojo, a recent benchmark for agentic security. To enhance its scope, we also create a richer synthetic dataset of human-AI banking conversations. In 16 user tasks from AgentDojo, LLMs show a 15-50 percentage point drop in utility under attack, with average attack success rates (ASR) around 20 percent; some defenses reduce ASR to zero. Most LLMs, even when successfully tricked by the attack, avoid leaking highly sensitive data like passwords, likely due to safety alignments, but they remain vulnerable to disclosing other personal data. The likelihood of password leakage increases when a password is requested along with one or two additional personal details. In an extended evaluation across 48 tasks, the average ASR is around 15 percent, with no built-in AgentDojo defense fully preventing leakage. Tasks involving data extraction or authorization workflows, which closely resemble the structure of exfiltration attacks, exhibit the highest ASRs, highlighting the interaction between task type, agent performance, and defense efficacy.

View on arXiv
@article{alizadeh2025_2506.01055,
  title={ Simple Prompt Injection Attacks Can Leak Personal Data Observed by LLM Agents During Task Execution },
  author={ Meysam Alizadeh and Zeynab Samei and Daria Stetsenko and Fabrizio Gilardi },
  journal={arXiv preprint arXiv:2506.01055},
  year={ 2025 }
}
Comments on this paper