Revisiting Bi-Linear State Transitions in Recurrent Neural Networks

The role of hidden units in recurrent neural networks is typically seen as modeling memory, with research focusing on enhancing information retention through gating mechanisms. A less explored perspective views hidden units as active participants in the computation performed by the network, rather than passive memory stores. In this work, we revisit bi-linear operations, which involve multiplicative interactions between hidden units and input embeddings. We demonstrate theoretically and empirically that they constitute a natural inductive bias for representing the evolution of hidden states in state tracking tasks. These are the simplest kind of task requiring hidden units to actively contribute to the behavior of the network. We also show that bi-linear state updates form a natural hierarchy corresponding to state tracking tasks of increasing complexity, with popular linear recurrent networks such as Mamba residing at the lowest-complexity center of that hierarchy.
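To make the idea concrete, below is a minimal sketch of a bi-linear recurrent state update, under the assumed parameterization that the state transition matrix is a linear function of the input embedding (so hidden units and inputs interact multiplicatively); the variable names and sizes are illustrative, not the paper's exact formulation.

```python
import numpy as np

# Bi-linear update: the transition matrix applied to the hidden state is
# itself input-dependent, h_t = (sum_k x_t[k] * W_k) @ h_{t-1}.
rng = np.random.default_rng(0)

d_in, d_h = 4, 8  # input embedding size and hidden state size (illustrative)
W = rng.normal(scale=0.1, size=(d_in, d_h, d_h))  # one transition matrix per input dimension

def bilinear_step(h_prev, x_t):
    """One bi-linear recurrent step with an input-dependent transition."""
    A_t = np.einsum("k,kij->ij", x_t, W)  # input-weighted mixture of transition matrices
    return A_t @ h_prev

# Unroll over a short random input sequence.
h = np.ones(d_h)
for x_t in rng.normal(size=(5, d_in)):
    h = bilinear_step(h, x_t)
```

Restricting each W_k to be diagonal recovers an elementwise, input-gated linear recurrence of the kind used by models such as Mamba, which is one way to see why such models sit at the low-complexity end of the hierarchy described in the abstract.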
@article{ebrahimi2025_2505.21749,
  title   = {Revisiting Bi-Linear State Transitions in Recurrent Neural Networks},
  author  = {M. Reza Ebrahimi and Roland Memisevic},
  journal = {arXiv preprint arXiv:2505.21749},
  year    = {2025}
}