
Mamba Modulation: On the Length Generalization of Mamba

Main: 10 pages
Appendix: 20 pages
Bibliography: 17 pages
10 figures
4 tables
Abstract

The quadratic complexity of the attention mechanism in Transformer models has motivated the development of alternative architectures with sub-quadratic scaling, such as state-space models. Among these, Mamba has emerged as a leading architecture, achieving state-of-the-art results across a range of language modeling tasks. However, Mamba's performance significantly deteriorates when applied to contexts longer than those seen during pre-training, revealing a sharp sensitivity to context length extension. Through detailed analysis, we attribute this limitation to the out-of-distribution behavior of its state-space dynamics, particularly within the parameterization of the state transition matrix $\mathbf{A}$. Unlike recent works which attribute this sensitivity to the vanishing accumulation of discretization time steps, $\exp(-\sum_{t=1}^N \Delta_t)$, we establish a connection between state convergence behavior as the input length approaches infinity and the spectrum of the transition matrix $\mathbf{A}$, offering a well-founded explanation of its role in length extension. To overcome this challenge, we propose an approach that applies spectrum scaling to pre-trained Mamba models, enabling robust long-context generalization by selectively modulating the spectrum of the $\mathbf{A}$ matrices in each layer. We show that this significantly improves performance in settings where simply modulating $\Delta_t$ fails, validating our insights and providing avenues for better length generalization of state-space models with structured transition matrices.
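To make the spectrum-scaling idea concrete, the sketch below runs a toy diagonal SSM recurrence and multiplies the eigenvalues of $\mathbf{A}$ by a scalar factor. This is a hypothetical simplification for illustration only: the function name `diagonal_ssm_scan`, the parameter `spectrum_scale`, and the Euler-style input discretization are assumptions, not the paper's actual method or Mamba's real parameterization. It only shows the mechanism: since the discretized transition is $\exp(\Delta_t \mathbf{A})$ with $\mathbf{A}$ diagonal and negative, shrinking the eigenvalue magnitudes (scale < 1) moves the transition spectrum toward 1, so states decay more slowly over long inputs.

```python
import numpy as np

def diagonal_ssm_scan(x, A_log, delta, B, C, spectrum_scale=1.0):
    """Toy diagonal state-space recurrence (illustrative, not Mamba's actual scan).

    x:      (N,) scalar input sequence
    A_log:  (d,) log-magnitudes of the diagonal of A; A = -exp(A_log) < 0
    delta:  (N,) per-step discretization time steps
    B, C:   (d,) input / output projection vectors
    spectrum_scale: scalar multiplier on the eigenvalues of A (assumed knob);
                    values < 1 push exp(delta * A) toward 1, slowing state decay.
    """
    A = -np.exp(A_log) * spectrum_scale      # scaled diagonal spectrum of A
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(len(x)):
        A_bar = np.exp(delta[t] * A)         # discretized transition (diagonal)
        h = A_bar * h + delta[t] * B * x[t]  # simple Euler-style input term
        ys.append(C @ h)
    return np.array(ys)
```

With `spectrum_scale < 1` every entry of `A_bar` is strictly closer to 1 than in the unscaled model, so the hidden state retains information over more steps; the paper's layer-wise, selective modulation of the spectrum of $\mathbf{A}$ is the principled version of this knob.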
