51
0

AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents

Main:9 Pages
12 Figures
Bibliography:3 Pages
12 Tables
Appendix:21 Pages
Abstract

As Large Language Model (LLM) agents become more widespread, associated misalignment risks increase. Prior work has examined agents' ability to enact misaligned behaviour (misalignment capability) and their compliance with harmful instructions (misuse propensity). However, the likelihood of agents attempting misaligned behaviours in real-world settings (misalignment propensity) remains poorly understood. We introduce a misalignment propensity benchmark, AgentMisalignment, consisting of a suite of realistic scenarios in which LLM agents have the opportunity to display misaligned behaviour. We organise our evaluations into subcategories of misaligned behaviours, including goal-guarding, resisting shutdown, sandbagging, and power-seeking. We report the performance of frontier models on our benchmark, observing higher misalignment on average when evaluating more capable models. Finally, we systematically vary agent personalities through different system prompts. We find that persona characteristics can dramatically and unpredictably influence misalignment tendencies -- occasionally far more than the choice of model itself -- highlighting the importance of careful system prompt engineering for deployed AI agents. Our work highlights the failure of current alignment methods to generalise to LLM agents, and underscores the need for further propensity evaluations as autonomous systems become more prevalent.

View on arXiv
@article{naik2025_2506.04018,
  title={ AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents },
  author={ Akshat Naik and Patrick Quinn and Guillermo Bosch and Emma Gouné and Francisco Javier Campos Zabala and Jason Ross Brown and Edward James Young },
  journal={arXiv preprint arXiv:2506.04018},
  year={ 2025 }
}
Comments on this paper