Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders

Multilayer perceptrons (MLPs) are an integral part of large language models, yet their dense representations render them difficult to understand, edit, and steer. Recent methods learn interpretable approximations via neuron-level sparsity, but fail to faithfully reconstruct the original mapping, significantly increasing the model's next-token cross-entropy loss. In this paper, we advocate for moving to layer-level sparsity to overcome the accuracy trade-off in sparse layer approximation. Under this paradigm, we introduce Mixture of Decoders (MxDs). MxDs generalize MLPs and Gated Linear Units, expanding pre-trained dense layers into tens of thousands of specialized sublayers. Through a flexible form of tensor factorization, each sparsely activating MxD sublayer implements a linear transformation with full-rank weights, preserving the original decoders' expressive capacity even under heavy sparsity. Experimentally, we show that MxDs significantly outperform state-of-the-art methods (e.g., Transcoders) on the sparsity-accuracy frontier in language models with up to 3B parameters. Further evaluations on sparse probing and feature steering demonstrate that MxDs learn similarly specialized features of natural language, opening up a promising new avenue for designing interpretable yet faithful decompositions. Our code is available at: this https URL.
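
To make the layer-level sparsity idea concrete, below is a minimal PyTorch sketch of one possible mixture-of-decoders layer: a pre-trained dense layer is replaced by many sparsely activating linear sublayers selected by a top-k gate. The specific parameterization (a shared full-rank decoder modulated by per-expert diagonal scalings) and the names `MixtureOfDecodersSketch`, `n_experts`, and `k_active` are illustrative assumptions, not the paper's exact tensor factorization.

```python
# Hypothetical sketch of a sparsely gated mixture-of-decoders layer.
# It approximates a dense block x -> decoder(act(encoder(x))) with many
# sublayers, only k of which are active per token.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfDecodersSketch(nn.Module):
    def __init__(self, d_in: int, d_hidden: int, d_out: int,
                 n_experts: int = 1024, k_active: int = 8):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        # Shared full-rank projection to the output space.
        self.decoder = nn.Linear(d_hidden, d_out)
        # Per-expert diagonal scalings of the hidden features: each expert's
        # effective decoder (diagonal times full-rank matrix) remains full rank,
        # while adding only d_hidden parameters per expert.
        self.expert_scale = nn.Parameter(torch.ones(n_experts, d_hidden))
        self.gate = nn.Linear(d_in, n_experts)
        self.k_active = k_active

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.gelu(self.encoder(x))                        # (batch, d_hidden)
        gate_logits = self.gate(x)                          # (batch, n_experts)
        topk_vals, topk_idx = gate_logits.topk(self.k_active, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)               # (batch, k)
        scales = self.expert_scale[topk_idx]                  # (batch, k, d_hidden)
        # Mix the active experts' scaled hidden features, then apply the
        # shared full-rank decoder once.
        mixed = (weights.unsqueeze(-1) * scales * h.unsqueeze(1)).sum(dim=1)
        return self.decoder(mixed)


# Usage: expand a 512 -> 2048 -> 512 MLP into 1024 sublayers, 8 active per token.
layer = MixtureOfDecodersSketch(d_in=512, d_hidden=2048, d_out=512)
y = layer(torch.randn(4, 512))  # shape: (4, 512)
```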
@article{oldfield2025_2505.21364,
  title   = {Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders},
  author  = {James Oldfield and Shawn Im and Yixuan Li and Mihalis A. Nicolaou and Ioannis Patras and Grigorios G Chrysos},
  journal = {arXiv preprint arXiv:2505.21364},
  year    = {2025}
}