This paper introduces MEBench, a novel benchmark for evaluating mutual exclusivity (ME) bias, a cognitive phenomenon observed in children during word learning. Unlike traditional ME tasks, MEBench additionally incorporates spatial reasoning to create more challenging and realistic evaluation settings. We assess the performance of state-of-the-art vision-language models (VLMs) on this benchmark using novel evaluation metrics that capture key aspects of ME-based reasoning. To facilitate controlled experimentation, we also present a flexible and scalable data generation pipeline that supports the construction of diverse annotated scenes.
@article{thai2025_2505.20122,
  title={MEBench: A Novel Benchmark for Understanding Mutual Exclusivity Bias in Vision-Language Models},
  author={Anh Thai and Stefan Stojanov and Zixuan Huang and Bikram Boote and James M. Rehg},
  journal={arXiv preprint arXiv:2505.20122},
  year={2025}
}