ResearchTrend.AI
Video World Models with Long-term Spatial Memory

5 June 2025
Tong Wu
Shuai Yang
Ryan Po
Yinghao Xu
Ziwei Liu
Dahua Lin
Gordon Wetzstein
Communities: VGen, KELM, VLM
Abstract

Emerging world models autoregressively generate video frames in response to actions, such as camera movements and text prompts, among other control signals. Due to limited temporal context window sizes, these models often struggle to maintain scene consistency during revisits, leading to severe forgetting of previously generated environments. Inspired by the mechanisms of human memory, we introduce a novel framework for enhancing the long-term consistency of video world models through a geometry-grounded long-term spatial memory. Our framework includes mechanisms to store and retrieve information from this long-term spatial memory, and we curate custom datasets to train and evaluate world models with explicitly stored 3D memory. Our evaluations show improved quality, consistency, and context length compared to relevant baselines, paving the way toward long-term consistent world generation.
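The abstract describes an autoregressive generation loop augmented with a geometry-keyed store/retrieve memory, so that revisiting a camera pose recovers previously generated content instead of regenerating it from a limited context window. A minimal toy sketch of that loop is below; all names (`SpatialMemory`, `generate_step`) and the distance-based retrieval rule are illustrative assumptions, not the paper's actual method.

```python
import math

class SpatialMemory:
    """Toy long-term memory keyed by 3D camera position (illustrative only)."""

    def __init__(self, radius=1.0):
        self.radius = radius   # poses closer than this count as a revisit
        self.entries = []      # list of (position, stored_context) pairs

    def store(self, position, context):
        self.entries.append((position, context))

    def retrieve(self, position):
        # Return the stored context nearest to `position`, if within radius.
        best, best_dist = None, self.radius
        for pos, ctx in self.entries:
            dist = math.dist(pos, position)
            if dist <= best_dist:
                best, best_dist = ctx, dist
        return best

def generate_step(memory, position, new_content):
    """One autoregressive step: condition on retrieved memory when revisiting,
    otherwise generate fresh content; either way, store the result."""
    retrieved = memory.retrieve(position)
    frame = retrieved if retrieved is not None else new_content
    memory.store(position, frame)
    return frame
```

In this sketch, moving away and later returning near a stored pose yields the earlier content, which is the consistency property the abstract targets; the real framework grounds the memory in reconstructed 3D geometry rather than raw camera positions.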

@article{wu2025_2506.05284,
  title={Video World Models with Long-term Spatial Memory},
  author={Tong Wu and Shuai Yang and Ryan Po and Yinghao Xu and Ziwei Liu and Dahua Lin and Gordon Wetzstein},
  journal={arXiv preprint arXiv:2506.05284},
  year={2025}
}