Long-Context State-Space Video World Models

Abstract

Video diffusion models have recently shown promise for world modeling through autoregressive frame prediction conditioned on actions. However, they struggle to maintain long-term memory due to the high computational cost associated with processing extended sequences in attention layers. To overcome this limitation, we propose a novel architecture leveraging state-space models (SSMs) to extend temporal memory without compromising computational efficiency. Unlike previous approaches that retrofit SSMs for non-causal vision tasks, our method fully exploits the inherent advantages of SSMs in causal sequence modeling. Central to our design is a block-wise SSM scanning scheme, which strategically trades off spatial consistency for extended temporal memory, combined with dense local attention to ensure coherence between consecutive frames. We evaluate the long-term memory capabilities of our model through spatial retrieval and reasoning tasks over extended horizons. Experiments on Memory Maze and Minecraft datasets demonstrate that our approach surpasses baselines in preserving long-range memory, while maintaining practical inference speeds suitable for interactive applications.

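The abstract's central design, a block-wise SSM scan over video tokens paired with dense attention over a short window of recent frames, can be pictured with a small sketch. The PyTorch code below is a minimal illustration under our own assumptions: the module names, the diagonal SSM parameterization, the block size, and the attention window are hypothetical choices for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a block-wise SSM scan plus dense local attention.
# Assumed tensor layout: x has shape (batch, time, height, width, channels).
import torch
import torch.nn as nn


class BlockwiseSSMScan(nn.Module):
    """Diagonal linear SSM scanned over time independently for each spatial block."""

    def __init__(self, dim, block=4):
        super().__init__()
        self.block = block                           # spatial block size (hypothetical)
        self.log_a = nn.Parameter(torch.zeros(dim))  # per-channel state decay logits
        self.b = nn.Linear(dim, dim)                 # input projection
        self.c = nn.Linear(dim, dim)                 # output projection

    def forward(self, x):  # x: (B, T, H, W, C)
        B, T, H, W, C = x.shape
        p = self.block
        # Group tokens into p x p spatial blocks; each block carries its own SSM
        # state across all T frames, avoiding full spatiotemporal attention.
        x = x.view(B, T, H // p, p, W // p, p, C)
        x = x.permute(0, 2, 4, 1, 3, 5, 6).reshape(-1, T, p * p, C)
        a = torch.sigmoid(self.log_a)                # decay kept in (0, 1)
        h = torch.zeros_like(x[:, 0])                # per-block recurrent state
        ys = []
        for t in range(T):                           # causal scan over time
            h = a * h + self.b(x[:, t])
            ys.append(self.c(h))
        y = torch.stack(ys, dim=1)                   # (B * num_blocks, T, p*p, C)
        y = y.reshape(B, H // p, W // p, T, p, p, C).permute(0, 3, 1, 4, 2, 5, 6)
        return y.reshape(B, T, H, W, C)


class LocalTemporalAttention(nn.Module):
    """Dense attention from each frame to the most recent `window` frames."""

    def __init__(self, dim, heads=4, window=2):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, T, H, W, C)
        B, T, H, W, C = x.shape
        out = []
        for t in range(T):
            lo = max(0, t - self.window + 1)
            q = x[:, t].reshape(B, H * W, C)         # tokens of the current frame
            kv = x[:, lo:t + 1].reshape(B, -1, C)    # tokens of the local window
            out.append(self.attn(q, kv, kv, need_weights=False)[0])
        return torch.stack(out, dim=1).reshape(B, T, H, W, C)


# Toy usage: 8 frames of 16x16 tokens with 64 channels.
x = torch.randn(2, 8, 16, 16, 64)
y = LocalTemporalAttention(64)(BlockwiseSSMScan(64)(x))
print(y.shape)  # torch.Size([2, 8, 16, 16, 64])
```

The sketch makes the stated trade-off concrete: each spatial block keeps its own recurrent state, so temporal memory extends over the full horizon at constant per-step cost, while the short dense attention window restores the frame-to-frame coherence that independent block scans give up.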
@article{po2025_2505.20171,
  title={Long-Context State-Space Video World Models},
  author={Ryan Po and Yotam Nitzan and Richard Zhang and Berlin Chen and Tri Dao and Eli Shechtman and Gordon Wetzstein and Xun Huang},
  journal={arXiv preprint arXiv:2505.20171},
  year={2025}
}