
XOXO: Stealthy Cross-Origin Context Poisoning Attacks against AI Coding Assistants

Abstract

AI coding assistants are widely used for tasks like code generation. These tools now require large and complex contexts, automatically sourced from multiple origins (files, projects, and contributors) and assembled into the prompt fed to the underlying LLMs. This automatic context gathering introduces new vulnerabilities: attackers can subtly poison the input to compromise the assistant's outputs, potentially generating vulnerable code or introducing critical errors. We propose a novel attack, Cross-Origin Context Poisoning (XOXO), that is challenging to detect because it relies on adversarial code modifications that are semantically equivalent. Traditional program analysis techniques struggle to identify these perturbations since the code's semantics remain correct, making it appear legitimate. This allows attackers to manipulate coding assistants into producing incorrect outputs while shifting the blame to the victim developer. We introduce a novel, task-agnostic, black-box attack algorithm, GCGS, that systematically searches the transformation space using a Cayley graph, achieving a 75.72% attack success rate on average across five tasks and eleven models, including GPT-4.1 and Claude 3.5 Sonnet v2, which power popular AI coding assistants. Furthermore, defenses such as adversarial fine-tuning are ineffective against our attack, underscoring the need for new security measures in LLM-powered coding tools.
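To make the notion of a semantics-preserving perturbation concrete, here is a minimal, hypothetical illustration (not an example from the paper): two functions with identical behavior but different surface forms. An attack of the kind the abstract describes rewrites context code in this meaning-preserving way, so static checks see legitimate code while the model's output can still be steered.

```python
# Hypothetical illustration of a semantics-preserving rewrite.
# The two functions below are behaviorally identical, yet their
# surface forms differ -- exactly the kind of perturbation that
# program analysis accepts as legitimate.

def is_even(n: int) -> bool:
    """Original form: modulo check."""
    return n % 2 == 0

def is_even_perturbed(n: int) -> bool:
    """Equivalent form: bitwise parity check."""
    remainder = n & 1  # low bit is 0 for even integers (incl. negatives in Python)
    return not bool(remainder)

# Equivalence holds on a sample range, including negative inputs.
assert all(is_even(i) == is_even_perturbed(i) for i in range(-100, 101))
print("semantically equivalent on tested range")
```

The point of the example is only that equivalence-preserving rewrites are cheap to generate and hard to flag; the paper's GCGS algorithm searches a space of such transformations systematically.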

@article{štorek2025_2503.14281,
  title={XOXO: Stealthy Cross-Origin Context Poisoning Attacks against AI Coding Assistants},
  author={Adam Štorek and Mukur Gupta and Noopur Bhatt and Aditya Gupta and Janie Kim and Prashast Srivastava and Suman Jana},
  journal={arXiv preprint arXiv:2503.14281},
  year={2025}
}