
Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders

Abstract

Large language models (LLMs) are now ubiquitous in user-facing applications, yet they still generate undesirable toxic outputs, including profanity, vulgarity, and derogatory remarks. Although numerous detoxification methods exist, most apply broad, surface-level fixes and can therefore easily be circumvented by jailbreak attacks. In this paper we leverage sparse autoencoders (SAEs) to identify toxicity-related directions in the residual stream of models and perform targeted activation steering using the corresponding decoder vectors. We introduce three tiers of steering aggressiveness and evaluate them on GPT-2 Small and Gemma-2-2B, revealing trade-offs between toxicity reduction and language fluency. At stronger steering strengths, these causal interventions surpass competitive baselines in reducing toxicity by up to 20%, though fluency can degrade noticeably on GPT-2 Small depending on the aggressiveness tier. Crucially, standard NLP benchmark scores remain stable under steering, indicating that the model's knowledge and general abilities are preserved. We further show that feature splitting in wider SAEs hampers safety interventions, underscoring the importance of disentangled feature learning. Our findings highlight both the promise and the current limitations of SAE-based causal interventions for LLM detoxification and suggest practical guidelines for safer language-model deployment.
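
To make the intervention concrete, the sketch below illustrates the general idea of steering a residual-stream activation with a single SAE decoder direction in GPT-2 Small. It is not the paper's implementation: the toxicity direction here is a random placeholder (a real run would load an identified SAE decoder row), and the layer index and steering strength alpha are illustrative values standing in for the paper's aggressiveness tiers.

# Minimal sketch of decoder-vector activation steering (assumptions noted below).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2"  # GPT-2 Small, one of the models evaluated in the paper
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

# Hypothetical toxicity direction: in practice, load the SAE decoder row for an
# identified toxicity-related feature; here a random unit vector is a placeholder.
d_model = model.config.n_embd
toxic_direction = torch.randn(d_model)
toxic_direction = toxic_direction / toxic_direction.norm()

alpha = 8.0      # steering strength (illustrative; corresponds to an aggressiveness tier)
layer_idx = 6    # residual-stream layer to intervene on (illustrative)

def steering_hook(module, inputs, output):
    # Subtract the toxicity direction from the block's hidden states at every position.
    if isinstance(output, tuple):
        hidden = output[0] - alpha * toxic_direction.to(output[0].dtype)
        return (hidden,) + output[1:]
    return output - alpha * toxic_direction.to(output.dtype)

handle = model.transformer.h[layer_idx].register_forward_hook(steering_hook)

prompt = "You are such a"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook once steered generation is done

In a full pipeline, the direction, layer, and strength would come from the SAE analysis the paper describes, with the three aggressiveness tiers corresponding to different values of alpha.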

@article{goyal2025_2505.14536,
  title={Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders},
  author={Agam Goyal and Vedant Rathi and William Yeh and Yian Wang and Yuen Chen and Hari Sundaram},
  journal={arXiv preprint arXiv:2505.14536},
  year={2025}
}