ResearchTrend.AI


Concealment of Intent: A Game-Theoretic Analysis

27 May 2025
Xinbo Wu, Abhishek Umrawal, Lav R. Varshney
ArXiv (abs) · PDF · HTML
Main: 8 pages · Appendix: 9 pages · Bibliography: 3 pages · 5 figures · 6 tables
Abstract

As large language models (LLMs) grow more capable, concerns about their safe deployment have also grown. Although alignment mechanisms have been introduced to deter misuse, they remain vulnerable to carefully designed adversarial prompts. In this work, we present a scalable attack strategy: intent-hiding adversarial prompting, which conceals malicious intent through the composition of skills. We develop a game-theoretic framework to model the interaction between such attacks and defense systems that apply both prompt and response filtering. Our analysis identifies equilibrium points and reveals structural advantages for the attacker. To counter these threats, we propose and analyze a defense mechanism tailored to intent-hiding attacks. Empirically, we validate the attack's effectiveness on multiple real-world LLMs across a range of malicious behaviors, demonstrating clear advantages over existing adversarial prompting techniques.
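
The abstract describes modeling the attacker (adversarial prompter) and defender (prompt/response filtering) as players in a game and identifying equilibrium points. As an illustration only, the sketch below builds a hypothetical 2x2 zero-sum game between an attacker choosing {direct prompt, intent-hiding prompt} and a defender choosing {prompt filter, response filter}, then enumerates pure-strategy Nash equilibria. The strategy names and payoff numbers are invented for illustration; the paper's actual game is richer than this.

```python
import itertools

# Hypothetical attacker-success payoffs (defender's payoff is the negative).
# Rows: attacker strategies; columns: defender strategies. Values are made up.
attacker_strats = ["direct_prompt", "intent_hiding"]
defender_strats = ["prompt_filter", "response_filter"]
payoff = {
    ("direct_prompt", "prompt_filter"): 0.1,
    ("direct_prompt", "response_filter"): 0.3,
    ("intent_hiding", "prompt_filter"): 0.7,
    ("intent_hiding", "response_filter"): 0.4,
}

def pure_nash_equilibria():
    """Enumerate strategy pairs where neither player gains by deviating."""
    eqs = []
    for a, d in itertools.product(attacker_strats, defender_strats):
        # Attacker best-responds: no other row pays more against column d.
        a_best = all(payoff[(a, d)] >= payoff[(a2, d)] for a2 in attacker_strats)
        # Defender best-responds: no other column lowers the attacker's payoff.
        d_best = all(payoff[(a, d)] <= payoff[(a, d2)] for d2 in defender_strats)
        if a_best and d_best:
            eqs.append((a, d))
    return eqs

print(pure_nash_equilibria())  # → [('intent_hiding', 'response_filter')]
```

In this toy instance the equilibrium has the attacker hiding intent and the defender filtering responses rather than prompts, loosely mirroring the abstract's claim that intent-hiding gives the attacker a structural advantage against prompt-level filtering.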

View on arXiv
@article{wu2025_2505.20841,
  title={Concealment of Intent: A Game-Theoretic Analysis},
  author={Xinbo Wu and Abhishek Umrawal and Lav R. Varshney},
  journal={arXiv preprint arXiv:2505.20841},
  year={2025}
}