
Overcoming Multi-step Complexity in Multimodal Theory-of-Mind Reasoning: A Scalable Bayesian Planner

Main: 9 pages · 7 figures · 14 tables · Bibliography: 4 pages · Appendix: 10 pages
Abstract

Theory-of-Mind (ToM) enables humans to infer mental states, such as beliefs, desires, and intentions, and forms the foundation of social cognition. However, existing computational ToM methods rely on structured workflows with ToM-specific priors or on deep model fine-tuning; they struggle to scale in multimodal environments and fail to generalize as task complexity increases. To address these limitations, we propose a scalable Bayesian ToM planner that decomposes ToM reasoning into stepwise Bayesian updates. Our framework introduces weak-to-strong control, allowing smaller language models (LMs) to specialize in ToM-specific likelihood estimation and transfer their reasoning behaviors to larger LMs (7B to 405B) for integration with social and world knowledge. This synergistic approach aligns large-model inference of human mental states with Bayesian principles. Extensive experiments show that our method achieves a 4.6% accuracy improvement over state-of-the-art techniques on multimodal ToM benchmarks, including challenging unseen scenarios, thereby establishing a new standard for modeling human mental states in complex environments.
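The stepwise Bayesian update at the core of the planner can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate mental states and per-step likelihood values are hypothetical, standing in for what the abstract describes as the smaller LM's ToM-specific likelihood estimates.

```python
def bayes_step(prior, likelihoods):
    """One Bayesian update: posterior ∝ likelihood × prior, renormalized."""
    unnorm = {h: likelihoods[h] * p for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Uniform prior over two illustrative candidate beliefs of an observed agent.
belief = {"wants_coffee": 0.5, "wants_tea": 0.5}

# Likelihood of each observed action under each hypothesis; in the proposed
# framework, a specialized smaller LM would estimate these values per step.
observations = [
    {"wants_coffee": 0.8, "wants_tea": 0.3},  # agent walks to the coffee machine
    {"wants_coffee": 0.7, "wants_tea": 0.2},  # agent picks up a mug
]

for obs in observations:
    belief = bayes_step(belief, obs)
```

Each observation sharpens the posterior over mental states; chaining such updates across steps is what lets the multi-step inference remain tractable as scenarios grow more complex.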

@article{zhang2025_2506.01301,
  title={Overcoming Multi-step Complexity in Multimodal Theory-of-Mind Reasoning: A Scalable Bayesian Planner},
  author={Chunhui Zhang and Zhongyu Ouyang and Kwonjoon Lee and Nakul Agarwal and Sean Dae Houlihan and Soroush Vosoughi and Shao-Yuan Lo},
  journal={arXiv preprint arXiv:2506.01301},
  year={2025}
}