Towards Multimodal Lifelong Understanding: A Dataset and Agentic Baseline

Guo Chen
Lidong Lu
Yicheng Liu
Liangrui Dong
Lidong Zou
Jixin Lv
Zhenquan Li
Xinyi Mao
Baoqi Pei
Shihao Wang
Zhiqi Li
Karan Sapra
Fuxiao Liu
Yin-Dong Zheng
Yifei Huang
Limin Wang
Zhiding Yu
Andrew Tao
Guilin Liu
Tong Lu
Main: 8 Pages
8 Figures
17 Tables
Appendix: 21 Pages
Abstract

While datasets for video understanding have scaled to hour-long durations, they typically consist of densely concatenated clips that differ from natural, unscripted daily life. To bridge this gap, we introduce MM-Lifelong, a dataset designed for Multimodal Lifelong Understanding. Comprising 181.1 hours of footage, it is structured across Day, Week, and Month scales to capture varying temporal densities. Extensive evaluations reveal two critical failure modes in current paradigms: end-to-end MLLMs suffer from a Working Memory Bottleneck due to context saturation, while representative agentic baselines experience Global Localization Collapse when navigating sparse, month-long timelines. To address this, we propose the Recursive Multimodal Agent (ReMA), which employs dynamic memory management to iteratively update a recursive belief state, significantly outperforming existing methods. Finally, we establish dataset splits designed to isolate temporal and domain biases, providing a rigorous foundation for future research in supervised learning and out-of-distribution generalization.
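The abstract only names ReMA's mechanism, so the following is a minimal Python sketch of what an iterative, recursive belief-state update with dynamic memory management could look like. Everything here is an assumption for illustration: the class and method names (`BeliefState`, `model.localize`, `model.inspect`, `model.summarize`, `model.answer`), the relevance-based eviction policy, and the fixed step budget are hypothetical placeholders, not the paper's actual interface.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a recursive belief-state agent loop; NOT the
# paper's actual ReMA implementation. Names and policies are assumptions.

@dataclass
class BeliefState:
    summary: str = ""                              # running natural-language belief
    evidence: list = field(default_factory=list)   # (timestamp, note, relevance) tuples
    budget: int = 32                               # max evidence entries to retain

    def update(self, timestamp: float, note: str, relevance: float) -> None:
        """Fold a new observation into the belief, then manage memory."""
        self.evidence.append((timestamp, note, relevance))
        # Dynamic memory management (assumed policy): when over budget,
        # keep only the most relevant entries so context never saturates.
        if len(self.evidence) > self.budget:
            self.evidence.sort(key=lambda e: e[2], reverse=True)
            self.evidence = self.evidence[: self.budget]

def answer_query(query: str, timeline, model, max_steps: int = 8) -> str:
    """Iteratively localize segments in a long timeline and refine a belief."""
    belief = BeliefState()
    for _ in range(max_steps):
        # Ask the model which segment to inspect next, given the current belief.
        segment = model.localize(query, belief.summary, timeline)
        if segment is None:  # model judges the gathered evidence sufficient
            break
        note, relevance = model.inspect(segment, query)
        belief.update(segment.start, note, relevance)
        # Recursive step: re-summarize the belief over the retained evidence.
        belief.summary = model.summarize(query, belief.evidence)
    return model.answer(query, belief.summary)
```

The point of the sketch is the contrast the abstract draws: instead of stuffing an entire month-long timeline into one context window (the Working Memory Bottleneck) or attempting a single global lookup over a sparse timeline (Global Localization Collapse), the loop alternates localization with a bounded, continually re-summarized memory.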
