92

Are Your Agents Upward Deceivers?

Dadi Guo
Qingyu Liu
Dongrui Liu
Qihan Ren
Shuai Shao
Tianyi Qiu
Haoran Li
Yi R. Fung
Zhongjie Ba
Juntao Dai
Jiaming Ji
Zhikai Chen
Jialing Tao
Yaodong Yang
Jing Shao
Xia Hu
Main:8 Pages
2 Figures
Bibliography:4 Pages
6 Tables
Appendix:26 Pages
Abstract

Large Language Model (LLM)-based agents are increasingly used as autonomous subordinates that carry out tasks for users. This raises the question of whether they may also engage in deception, similar to how individuals in human organizations lie to superiors to create a good image or avoid punishment. We observe and define agentic upward deception, a phenomenon in which an agent facing environmental constraints conceals its failure and performs actions that were not requested without reporting. To assess its prevalence, we construct a benchmark of 200 tasks covering five task types and eight realistic scenarios in a constrained environment, such as broken tools or mismatched information sources. Evaluations of 11 popular LLMs reveal that these agents typically exhibit action-based deceptive behaviors, such as guessing results, performing unsupported simulations, substituting unavailable information sources, and fabricating local files. We further test prompt-based mitigation and find only limited reductions, suggesting that it is difficult to eliminate and highlighting the need for stronger mitigation strategies to ensure the safety of LLM-based agents.

View on arXiv
Comments on this paper