Interpretable Risk Mitigation in LLM Agent Systems

Abstract

Autonomous agents powered by large language models (LLMs) enable novel use cases in domains where responsible action is increasingly important. Yet the inherent unpredictability of LLMs raises safety concerns about agent reliability. In this work, we explore agent behaviour in a toy, game-theoretic environment based on a variation of the Iterated Prisoner's Dilemma. We introduce a strategy-modification method, independent of both the game and the prompt, that steers the residual stream with interpretable features extracted from a sparse autoencoder latent space. Steering with the good-faith negotiation feature lowers the average defection probability by 28 percentage points. We also identify feasible steering ranges for several open-source LLM agents. Finally, we hypothesise that game-theoretic evaluation of LLM agents, combined with representation-steering alignment, can generalise to real-world applications on end-user devices and embodied platforms.
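The abstract describes adding an interpretable feature direction to the residual stream at inference time. Below is a minimal sketch of that kind of intervention, assuming a generic Hugging Face causal LM; the model name, layer index, steering coefficient, and the random placeholder vector standing in for an SAE decoder column are all illustrative assumptions, not the paper's actual setup.

```python
# Sketch of residual-stream steering with a single feature direction.
# "gpt2", layer_idx, alpha, and the random feature_direction are placeholders;
# in the paper's setting the direction would come from a trained sparse
# autoencoder and the model would be one of the evaluated open-source agents.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the LLM agent under study
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

hidden_dim = model.config.hidden_size
layer_idx = 6    # hypothetical layer at which to intervene
alpha = 4.0      # hypothetical steering coefficient within a feasible range

# Placeholder for one column of an SAE decoder matrix (the interpretable feature).
feature_direction = torch.randn(hidden_dim)
feature_direction = feature_direction / feature_direction.norm()

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the residual-stream
    # activation of shape (batch, seq_len, hidden_dim); shift it along the feature.
    hidden = output[0] + alpha * feature_direction.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steering_hook)

prompt = "You are negotiating the next round of an iterated prisoner's dilemma."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(generated[0], skip_special_tokens=True))

handle.remove()  # restore the unsteered model
```

Because the hook is independent of both the prompt and the game environment, the same intervention can be applied across different agent configurations, which is the property the abstract highlights.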

@article{chojnacki2025_2505.10670,
  title={Interpretable Risk Mitigation in LLM Agent Systems},
  author={Jan Chojnacki},
  journal={arXiv preprint arXiv:2505.10670},
  year={2025}
}