ST-Think: How Multimodal Large Language Models Reason About 4D Worlds from Ego-Centric Videos

Abstract

Humans excel at spatial-temporal reasoning, effortlessly interpreting dynamic visual events from an egocentric viewpoint. However, whether multimodal large language models (MLLMs) can similarly understand the 4D world remains uncertain. This paper explores multimodal spatial-temporal reasoning from an egocentric perspective, aiming to equip MLLMs with human-like reasoning capabilities. To support this objective, we introduce Ego-ST Bench, a novel benchmark containing over 5,000 question-answer pairs across four categories that systematically evaluate spatial, temporal, and integrated spatial-temporal reasoning. Additionally, we propose ST-R1, a video-based reasoning model whose training paradigm incorporates reverse thinking into the reinforcement learning process, significantly enhancing performance. We combine long-chain-of-thought (long-CoT) supervised fine-tuning with Group Relative Policy Optimization (GRPO) reinforcement learning, achieving notable improvements with limited high-quality data. Ego-ST Bench and ST-R1 provide valuable insights and resources for advancing video-based spatial-temporal reasoning research.
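
The GRPO step mentioned in the abstract scores a group of sampled answers to the same question and normalizes each reward against the group's statistics. The snippet below is a minimal illustrative sketch of that group-relative advantage computation, not the authors' implementation; the binary reward values and the group size of four are assumptions chosen for the example.

# Minimal sketch of GRPO-style group-relative advantages (illustrative, not the paper's code).
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize rewards within one sampled group: A_i = (r_i - mean) / (std + eps)."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: four sampled answers to one spatial-temporal question, with a
# hypothetical binary reward of 1.0 for a correct final answer and 0.0 otherwise.
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_relative_advantages(rewards))  # answers above the group mean receive positive advantage

Answers with positive advantage are reinforced and those below the group mean are discouraged, which is what lets GRPO train without a separate value model.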

@article{wu2025_2503.12542,
  title={ST-Think: How Multimodal Large Language Models Reason About 4D Worlds from Ego-Centric Videos},
  author={Peiran Wu and Yunze Liu and Miao Liu and Junxiao Shen},
  journal={arXiv preprint arXiv:2503.12542},
  year={2025}
}