
Feature Hedging: Correlated Features Break Narrow Sparse Autoencoders

Abstract

It is commonly assumed that sparse autoencoders (SAEs) decompose polysemantic activations into interpretable linear directions, as long as the activations are composed of sparse linear combinations of underlying features. However, we find that if an SAE is narrower than the number of underlying "true features" on which it is trained, and there is correlation between features, the SAE will merge components of correlated features together, thus destroying monosemanticity. In LLM SAEs, these two conditions almost certainly hold. This phenomenon, which we call feature hedging, is caused by SAE reconstruction loss, and is more severe the narrower the SAE. In this work, we introduce the problem of feature hedging and study it both theoretically in toy models and empirically in SAEs trained on LLMs. We suspect that feature hedging may be one of the core reasons that SAEs consistently underperform supervised baselines. Finally, we use our understanding of feature hedging to propose an improved variant of matryoshka SAEs. Our work shows that there remain fundamental issues with SAEs, but we are hopeful that highlighting feature hedging will catalyze future advances that allow SAEs to achieve their full potential of interpreting LLMs at scale.
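To make the failure mode concrete, here is a minimal toy sketch of the setup the abstract describes. This is our illustration, not code from the paper: the feature directions, firing probabilities, L1 coefficient, and helper names (sample_batch, train_sae) are all illustrative assumptions. A 1-latent ReLU SAE is trained on activations built from 2 "true" features, so the SAE is narrower than the feature count. When the child feature fires only alongside the parent (correlated), reconstruction loss should push the single decoder direction to absorb a component of the child feature; when the features fire independently, the decoder direction should stay close to the parent feature alone.

```python
import torch

torch.manual_seed(0)
d_model = 4

# Two orthogonal "true" feature directions (hypothetical toy features).
f1 = torch.tensor([1.0, 0.0, 0.0, 0.0])
f2 = torch.tensor([0.0, 1.0, 0.0, 0.0])

def sample_batch(n, correlated):
    """Parent feature a1 fires with prob 0.5. Child a2 has marginal firing
    prob 0.25, either independently or only when a1 fires (correlated)."""
    a1 = (torch.rand(n) < 0.5).float()
    if correlated:
        a2 = a1 * (torch.rand(n) < 0.5).float()  # fires only alongside a1
    else:
        a2 = (torch.rand(n) < 0.25).float()      # fires independently of a1
    return a1[:, None] * f1 + a2[:, None] * f2

class SAE(torch.nn.Module):
    """Standard ReLU SAE: linear encoder, ReLU, linear decoder."""
    def __init__(self, d, n_latents):
        super().__init__()
        self.enc = torch.nn.Linear(d, n_latents)
        self.dec = torch.nn.Linear(n_latents, d)

    def forward(self, x):
        acts = torch.relu(self.enc(x))
        return self.dec(acts), acts

def train_sae(correlated, n_steps=3000):
    # 1 latent vs. 2 true features: the SAE is deliberately too narrow.
    sae = SAE(d_model, n_latents=1)
    opt = torch.optim.Adam(sae.parameters(), lr=3e-3)
    for _ in range(n_steps):
        x = sample_batch(512, correlated)
        recon, acts = sae(x)
        # Reconstruction (MSE) loss plus a small L1 sparsity penalty.
        loss = ((recon - x) ** 2).mean() + 1e-3 * acts.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Alignment of the unit-normalized decoder direction with each feature.
    w = sae.dec.weight[:, 0]
    w = w / w.norm()
    return (w @ f1).item(), (w @ f2).item()

for correlated in (False, True):
    c1, c2 = train_sae(correlated)
    print(f"correlated={correlated}: dec.f1={c1:+.2f}, dec.f2={c2:+.2f}")
```

In this sketch, the decoder bias can absorb the child feature's mean in the independent case, so a nonzero dec.f2 in the correlated case reflects genuine hedging driven by the covariance between the features, not just their co-occurrence rate.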

@article{chanin2025_2505.11756,
  title={Feature Hedging: Correlated Features Break Narrow Sparse Autoencoders},
  author={David Chanin and Tomáš Dulka and Adrià Garriga-Alonso},
  journal={arXiv preprint arXiv:2505.11756},
  year={2025}
}