Interpreting and Steering State-Space Models via Activation Subspace Bottlenecks

Vamshi Sunku Mohan
Kaustubh Gupta
Aneesha Das
Chandan Singh
Main: 9 pages · Appendix: 23 pages · Bibliography: 1 page · 30 figures · 21 tables
Abstract

State-space models (SSMs) have emerged as an efficient strategy for building powerful language models, avoiding the quadratic complexity of computing attention in transformers. Despite their promise, the interpretability and steerability of modern SSMs remain relatively underexplored. We take a major step in this direction by identifying activation subspace bottlenecks in the Mamba family of SSM models using tools from mechanistic interpretability. We then introduce a test-time steering intervention that simply multiplies the activations of the identified bottlenecks by a scalar. Across 5 SSMs and 6 diverse benchmarks, this intervention improves performance by an average of 8.27%, without requiring any task-specific tuning. Finally, we validate that the identified bottlenecks are indeed hindering performance by modifying them to yield an architecture we call Stable-Mamba, which achieves long-context performance gains when retrained from scratch.
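The intervention described above, scaling the activations of the identified bottlenecks by a scalar at test time, can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' code: the function name, the assumption that the bottleneck is represented as an orthonormal subspace basis, and the parameter `alpha` are all assumptions for the sketch.

```python
import numpy as np

def steer_activations(h, basis, alpha):
    """Scale the component of activations h lying in a bottleneck
    subspace (spanned by the orthonormal columns of `basis`) by alpha,
    leaving the orthogonal complement untouched.

    h:     (batch, d) activation matrix
    basis: (d, k) orthonormal basis for the identified subspace
    alpha: scalar multiplier (the test-time intervention)
    """
    # project h onto the subspace: h @ basis gives coordinates,
    # @ basis.T maps them back into activation space
    inside = h @ basis @ basis.T
    # alpha = 1 recovers the original activations; alpha > 1
    # amplifies only the bottleneck component
    return h + (alpha - 1.0) * inside
```

With `alpha = 1.0` the function is the identity, so the intervention is a strict generalization of the unmodified forward pass; in practice such a steering function would be applied inside the model (e.g. via a forward hook) at the layers where the bottlenecks were identified.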
