What's in the Box? Reasoning about Unseen Objects from Multimodal Cues
- OCL

People regularly make inferences about objects in the world that they cannot see by flexibly integrating information from multiple sources: auditory and visual cues, language, and prior beliefs and knowledge about the scene. How are we able to integrate so many sources of information to make sense of the world around us, even when we lack direct observation? In this work, we propose a neurosymbolic model that uses neural networks to parse open-ended multimodal inputs and then applies a Bayesian model to integrate these sources of information and evaluate competing hypotheses. We evaluate our model with a novel object guessing game called "What's in the Box?", in which humans and models watch a video clip of an experimenter shaking boxes and then try to guess the objects inside. Through a human experiment, we show that our model correlates strongly with human judgments, whereas ablated unimodal models and large multimodal neural baselines correlate poorly.
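The abstract does not spell out the inference machinery, but the core idea of combining a prior with per-modality evidence suggests a simple Bayesian integration scheme. Below is a minimal illustrative sketch, not the authors' implementation: the hypothesis set, the per-cue likelihood values, and the function name are all hypothetical assumptions chosen only to show how multiple cues could be fused via Bayes' rule.

```python
import numpy as np

# Hypothetical candidate objects that might be inside the box (illustrative only).
HYPOTHESES = ["coin", "die", "marble", "key"]

def posterior_over_objects(prior, modality_likelihoods):
    """Combine a prior with per-modality likelihoods via Bayes' rule.

    prior: dict mapping hypothesis -> prior probability.
    modality_likelihoods: list of dicts, one per cue (e.g. audio, vision,
        language), each mapping hypothesis -> P(cue | hypothesis).
    Returns a dict mapping hypothesis -> posterior probability.
    """
    scores = np.array([
        prior[h] * np.prod([lik[h] for lik in modality_likelihoods])
        for h in HYPOTHESES
    ])
    scores /= scores.sum()  # normalize into a proper distribution
    return dict(zip(HYPOTHESES, scores))

# Example with made-up numbers: the shaking sound suggests small metal objects,
# and a verbal cue ("it jingles") reinforces coins and keys.
prior = {h: 0.25 for h in HYPOTHESES}
audio = {"coin": 0.6, "die": 0.1, "marble": 0.1, "key": 0.5}
language = {"coin": 0.7, "die": 0.05, "marble": 0.05, "key": 0.6}
print(posterior_over_objects(prior, [audio, language]))
```

The multiplicative combination treats the cues as conditionally independent given the hypothesis; whether the paper's model makes that assumption is not stated in the abstract.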
@article{ying2025_2506.14212,
  title={What's in the Box? Reasoning about Unseen Objects from Multimodal Cues},
  author={Lance Ying and Daniel Xu and Alicia Zhang and Katherine M. Collins and Max H. Siegel and Joshua B. Tenenbaum},
  journal={arXiv preprint arXiv:2506.14212},
  year={2025}
}