
AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents

Abstract

Autonomous AI agents that can follow instructions and perform complex multi-step tasks have tremendous potential to boost human productivity. However, to perform many of these tasks, the agents need access to personal information from their users, raising the question of whether they are capable of using it appropriately. In this work, we introduce a new benchmark, AgentDAM, that measures whether AI web-navigation agents follow the privacy principle of "data minimization". For the purposes of our benchmark, data minimization means that the agent uses a piece of potentially sensitive information only if it is necessary to complete a particular task. Our benchmark simulates realistic web interaction scenarios end-to-end and is adaptable to all existing web navigation agents. We use AgentDAM to evaluate how well AI agents built on top of GPT-4, Llama-3, and Claude can limit processing of potentially private information, and show that they are prone to inadvertent use of unnecessary sensitive information. We also propose a prompting-based defense that reduces information leakage, and demonstrate that our end-to-end benchmarking provides a more realistic measure than probing LLMs about privacy. Our results highlight that further research is needed to develop AI agents that can prioritize data minimization at inference time.

@article{zharmagambetov2025_2503.09780,
  title={AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents},
  author={Arman Zharmagambetov and Chuan Guo and Ivan Evtimov and Maya Pavlova and Ruslan Salakhutdinov and Kamalika Chaudhuri},
  journal={arXiv preprint arXiv:2503.09780},
  year={2025}
}