Probing the Vulnerability of Large Language Models to Polysemantic Interventions

16 May 2025
Bofan Gong
Shiyang Lai
Dawn Song
Topics: AAML, MILM
Abstract

Polysemanticity, where individual neurons encode multiple unrelated features, is a well-known characteristic of large neural networks and remains a central challenge in language model interpretability. At the same time, its implications for model safety are poorly understood. Leveraging recent advances in sparse autoencoders, we investigate the polysemantic structure of two small models (Pythia-70M and GPT-2-Small) and evaluate their vulnerability to targeted, covert interventions at the prompt, feature, token, and neuron levels. Our analysis reveals a consistent polysemantic topology shared across both models. Strikingly, we demonstrate that this structure can be exploited to mount effective interventions on two larger, black-box instruction-tuned models (LLaMA3.1-8B-Instruct and Gemma-2-9B-Instruct). These findings not only suggest that the interventions generalize but also point to a stable, transferable polysemantic structure that may persist across architectures and training regimes.
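The approach described in the abstract rests on sparse autoencoders (SAEs): models trained to decompose a network's hidden activations into a wider set of sparsely active features, which is how individual polysemantic neurons can be teased apart into separate features. The sketch below is a minimal, illustrative version of that idea in PyTorch, not the authors' implementation; the dimensions, the L1 sparsity penalty, and all names are assumptions (d_model=768 matches GPT-2-Small's hidden size, the feature count is arbitrary).

# Minimal SAE sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_features: int = 24576):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # activations -> feature codes
        self.decoder = nn.Linear(d_features, d_model)   # feature codes -> reconstruction

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative features
        reconstruction = self.decoder(features)
        return reconstruction, features

# Training objective: reconstruct the activations while keeping features sparse.
sae = SparseAutoencoder()
acts = torch.randn(32, 768)                  # stand-in for residual-stream activations
recon, feats = sae(acts)
l1_coeff = 1e-3                              # sparsity penalty weight (illustrative)
loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
loss.backward()

In this framing, each learned feature ideally captures one interpretable direction in activation space, so interventions can target a single feature (or the neurons most responsible for it) rather than an entangled, polysemantic neuron.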

@article{gong2025_2505.11611,
  title={Probing the Vulnerability of Large Language Models to Polysemantic Interventions},
  author={Bofan Gong and Shiyang Lai and Dawn Song},
  journal={arXiv preprint arXiv:2505.11611},
  year={2025}
}