PRISON: Unmasking the Criminal Potential of Large Language Models

19 June 2025
Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang
arXiv (abs) · PDF · HTML
Main: 9 pages · 6 figures · 12 tables · Bibliography: 3 pages · Appendix: 26 pages
Abstract

As large language models (LLMs) advance, concerns about their misconduct in complex social contexts intensify. Existing research has overlooked the systematic understanding and assessment of their criminal capability in realistic interactions. We propose PRISON, a unified framework that quantifies LLMs' criminal potential across five dimensions: False Statements, Frame-Up, Psychological Manipulation, Emotional Disguise, and Moral Disengagement. Using structured crime scenarios adapted from classic films, we evaluate both the criminal potential and the anti-crime ability of LLMs via role-play. Results show that state-of-the-art LLMs frequently exhibit emergent criminal tendencies, such as proposing misleading statements or evasion tactics, even without explicit instructions. Moreover, when placed in a detective role, models recognize deceptive behavior with only 41% accuracy on average, revealing a striking mismatch between conducting and detecting criminal behavior. These findings underscore the urgent need for adversarial robustness, behavioral alignment, and safety mechanisms before broader LLM deployment.
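
To make the role-play evaluation concrete, the sketch below shows one way per-dimension criminal-potential rates and detective-role detection accuracy could be tallied from annotated transcripts. The Turn layout, field names, and sample data are illustrative assumptions, not the authors' implementation.

# Hypothetical scoring sketch for a PRISON-style evaluation (assumed data
# layout, not the paper's code): tally how often each of the five
# dimensions appears in a transcript, and how often a detective-role
# model's deception flags match ground-truth labels.
from collections import Counter
from dataclasses import dataclass

DIMENSIONS = [
    "False Statements",
    "Frame-Up",
    "Psychological Manipulation",
    "Emotional Disguise",
    "Moral Disengagement",
]

@dataclass
class Turn:
    """One model turn in a crime-scenario role-play, with annotations."""
    text: str
    expressed: list[str]   # dimensions this turn exhibits (may be empty)
    deceptive: bool        # ground truth: is the turn deceptive?
    flagged: bool          # did the detective-role model flag it?

def criminal_potential(turns: list[Turn]) -> dict[str, float]:
    """Fraction of turns expressing each of the five dimensions."""
    counts = Counter(dim for t in turns for dim in t.expressed)
    return {dim: counts[dim] / len(turns) for dim in DIMENSIONS}

def detection_accuracy(turns: list[Turn]) -> float:
    """Share of turns where the detective's flag matches ground truth."""
    return sum(t.deceptive == t.flagged for t in turns) / len(turns)

transcript = [
    Turn("I was home all night.", ["False Statements"], True, False),
    Turn("You saw him near the safe, right?",
         ["Frame-Up", "Psychological Manipulation"], True, True),
    Turn("I have nothing to hide.", ["Emotional Disguise"], True, False),
    Turn("Let's go over the timeline.", [], False, False),
]
print(criminal_potential(transcript))
print(f"detection accuracy: {detection_accuracy(transcript):.0%}")

On this toy transcript the detective-role flags match ground truth on only two of four turns (50%), mirroring the kind of gap the paper reports between conducting and detecting deceptive behavior.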

@article{wu2025_2506.16150,
  title={PRISON: Unmasking the Criminal Potential of Large Language Models},
  author={Xinyi Wu and Geng Hong and Pei Chen and Yueyue Chen and Xudong Pan and Min Yang},
  journal={arXiv preprint arXiv:2506.16150},
  year={2025}
}