
Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval

Main: 8 pages · Bibliography: 2 pages · Appendix: 3 pages · 9 figures · 3 tables
Abstract

Recent advances in interactive video generation have shown promising results, yet existing approaches struggle with scene-consistent memory in long video generation due to limited use of historical context. In this work, we propose Context-as-Memory, which utilizes historical context as memory for video generation. It includes two simple yet effective designs: (1) storing context in frame format without additional post-processing; (2) conditioning by concatenating the context and the frames to be predicted along the frame dimension at the input, requiring no external control modules. Furthermore, considering the enormous computational overhead of incorporating all historical context, we propose a Memory Retrieval module that selects truly relevant context frames by determining FOV (Field of View) overlap between camera poses, which significantly reduces the number of candidate frames without substantial information loss. Experiments demonstrate that Context-as-Memory achieves superior memory capabilities in interactive long video generation compared to SOTAs, and generalizes effectively even to open-domain scenarios not seen during training. Our project page is available at this https URL.
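For concreteness, below is a minimal sketch of the two ideas described in the abstract: retrieving context frames by FOV overlap between camera poses, then conditioning by concatenating the retrieved context with the frames to be predicted along the frame dimension. The function names, the camera forward-axis convention, and the angular-overlap heuristic are our own illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def retrieve_context_by_fov(history_poses, current_pose, fov_deg=90.0, top_k=8):
    """Pick historical frames whose camera FOV plausibly overlaps the current view.

    history_poses: (N, 4, 4) camera-to-world matrices of already-generated frames.
    current_pose:  (4, 4) camera-to-world matrix of the frame to be predicted.
    Returns indices of up to top_k relevant frames.
    (Illustrative heuristic only; the paper's exact FOV-overlap test may differ.)
    """
    cur_dir = current_pose[:3, 2] / np.linalg.norm(current_pose[:3, 2])  # assumed +z forward
    cur_pos = current_pose[:3, 3]
    half_fov = np.deg2rad(fov_deg) / 2.0

    angles, dists = [], []
    for pose in history_poses:
        d = pose[:3, 2] / np.linalg.norm(pose[:3, 2])
        angles.append(np.arccos(np.clip(d @ cur_dir, -1.0, 1.0)))
        dists.append(np.linalg.norm(pose[:3, 3] - cur_pos))
    angles, dists = np.array(angles), np.array(dists)

    # Two views can only share content if their viewing cones overlap angularly.
    candidate = np.where(angles < 2 * half_fov)[0]
    # Rank candidates by angular alignment first, then by camera proximity.
    order = candidate[np.lexsort((dists[candidate], angles[candidate]))]
    return order[:top_k]

def build_model_input(context_frames, frames_to_predict):
    """Condition by concatenating context and target frames along the frame axis.

    context_frames:    (T_ctx, C, H, W) retrieved memory frames.
    frames_to_predict: (T_pred, C, H, W) noisy latents / placeholders for new frames.
    """
    return np.concatenate([context_frames, frames_to_predict], axis=0)
```

In this reading, retrieval happens purely in camera-pose space, so the number of frames fed to the generator stays small regardless of how long the video grows; the generator itself needs no extra control module because the memory enters as ordinary input frames.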

@article{yu2025_2506.03141,
  title={Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval},
  author={Jiwen Yu and Jianhong Bai and Yiran Qin and Quande Liu and Xintao Wang and Pengfei Wan and Di Zhang and Xihui Liu},
  journal={arXiv preprint arXiv:2506.03141},
  year={2025}
}