3DLLM-Mem: Long-Term Spatial-Temporal Memory for Embodied 3D Large Language Model

Humans excel at performing complex tasks by leveraging long-term memory across temporal and spatial experiences. In contrast, current Large Language Models (LLMs) struggle to effectively plan and act in dynamic, multi-room 3D environments. We posit that part of this limitation is due to the lack of proper 3D spatial-temporal memory modeling in LLMs. To address this, we first introduce 3DMem-Bench, a comprehensive benchmark comprising over 26,000 trajectories and 2,892 embodied tasks, question-answering, and captioning problems, designed to evaluate an agent's ability to reason over long-term memory in 3D environments. Second, we propose 3DLLM-Mem, a novel dynamic memory management and fusion model for embodied spatial-temporal reasoning and action in LLMs. Our model uses working memory tokens, which represent current observations, as queries to selectively attend to and fuse the most useful spatial and temporal features from episodic memory, which stores past observations and interactions. This approach allows the agent to focus on task-relevant information while maintaining memory efficiency in complex, long-horizon environments. Experimental results demonstrate that 3DLLM-Mem achieves state-of-the-art performance across various tasks, outperforming the strongest baselines by 16.5% in success rate on 3DMem-Bench's most challenging in-the-wild embodied tasks.
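The abstract describes working memory tokens acting as queries that attend over episodic memory to fuse relevant spatial-temporal features. The snippet below is a minimal sketch of that general idea, not the authors' implementation: the module name (EpisodicMemoryFusion), dimensions, and residual-fusion choice are illustrative assumptions based only on the mechanism described above.

```python
# Minimal sketch of query-based memory fusion: working-memory tokens
# (current observations) cross-attend to episodic-memory tokens (past
# observations) and fuse the attended features back in.
# Names and hyperparameters are assumptions, not from the paper.
import torch
import torch.nn as nn


class EpisodicMemoryFusion(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        # Cross-attention: queries from working memory, keys/values from episodic memory.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, working_mem: torch.Tensor, episodic_mem: torch.Tensor) -> torch.Tensor:
        # working_mem:  (batch, n_work_tokens, d_model)  -- current observation tokens
        # episodic_mem: (batch, n_epis_tokens, d_model)  -- stored past observation tokens
        fused, _ = self.cross_attn(query=working_mem, key=episodic_mem, value=episodic_mem)
        # Residual fusion keeps current observations grounded while injecting
        # the most relevant past spatial-temporal features.
        return self.norm(working_mem + fused)


if __name__ == "__main__":
    fusion = EpisodicMemoryFusion()
    work = torch.randn(2, 32, 256)    # current-step tokens
    epis = torch.randn(2, 1024, 256)  # long-horizon episodic tokens
    print(fusion(work, epis).shape)   # torch.Size([2, 32, 256])
```

Because the episodic memory only contributes through attention, its length can grow with the horizon without expanding the set of tokens the agent reasons over, which is the memory-efficiency property the abstract highlights.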
@article{hu2025_2505.22657,
  title={3DLLM-Mem: Long-Term Spatial-Temporal Memory for Embodied 3D Large Language Model},
  author={Wenbo Hu and Yining Hong and Yanjun Wang and Leison Gao and Zibu Wei and Xingcheng Yao and Nanyun Peng and Yonatan Bitton and Idan Szpektor and Kai-Wei Chang},
  journal={arXiv preprint arXiv:2505.22657},
  year={2025}
}