Mimicking or Reasoning: Rethinking Multi-Modal In-Context Learning in Vision-Language Models

Vision-language models (VLMs) are widely assumed to exhibit in-context learning (ICL), a property similar to that of their language-only counterparts. While recent work suggests VLMs can perform multimodal ICL (MM-ICL), studies show they often rely on shallow heuristics -- such as copying or majority voting -- rather than true task understanding. We revisit this assumption by evaluating VLMs under distribution shifts, where support examples come from a dataset different from the query. Surprisingly, performance often degrades with more demonstrations, and models tend to copy answers rather than learn from them. To investigate further, we propose a new MM-ICL with Reasoning pipeline that augments each demonstration with a generated rationale alongside the answer. We conduct extensive experiments on both perception- and reasoning-required datasets with open-source VLMs ranging from 3B to 72B and proprietary models such as Gemini 2.0, running controlled studies that vary shot count, retrieval method, rationale quality, and demonstration distribution. Our results show limited performance sensitivity across these factors, suggesting that current VLMs do not effectively utilize demonstration-level information as intended in MM-ICL.
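To make the "MM-ICL with Reasoning" setup concrete, the sketch below shows one plausible way to assemble an n-shot multimodal prompt in which each demonstration carries a generated rationale alongside its answer, as the abstract describes. The names (Demo, build_prompt), the image placeholder tokens, and the exact template are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a rationale-augmented MM-ICL prompt, assuming a simple
# text template with <image> placeholders; the real pipeline may differ.
from dataclasses import dataclass
from typing import List


@dataclass
class Demo:
    image_ref: str   # placeholder token or path for the demonstration image
    question: str
    rationale: str   # generated reasoning accompanying the answer
    answer: str


def build_prompt(demos: List[Demo], query_image_ref: str, query_question: str) -> str:
    """Assemble an n-shot MM-ICL prompt whose demonstrations include rationales."""
    parts = []
    for i, d in enumerate(demos, start=1):
        parts.append(
            f"Example {i}:\n"
            f"<image>{d.image_ref}</image>\n"
            f"Question: {d.question}\n"
            f"Rationale: {d.rationale}\n"
            f"Answer: {d.answer}\n"
        )
    parts.append(
        "Now answer the new question.\n"
        f"<image>{query_image_ref}</image>\n"
        f"Question: {query_question}\n"
        "Rationale:"
    )
    return "\n".join(parts)


if __name__ == "__main__":
    demos = [
        Demo("demo_1.jpg", "How many dogs are in the image?",
             "Two dogs sit near the bench and one stands by the tree.", "3"),
    ]
    print(build_prompt(demos, "query.jpg", "How many cats are in the image?"))
```

In the paper's controlled studies, the support demonstrations fed to such a builder would be varied by shot count, retrieval method, rationale quality, and whether they are drawn from the same dataset as the query.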
@article{huang2025_2506.07936,
  title   = {Mimicking or Reasoning: Rethinking Multi-Modal In-Context Learning in Vision-Language Models},
  author  = {Chengyue Huang and Yuchen Zhu and Sichen Zhu and Jingyun Xiao and Moises Andrade and Shivang Chopra and Zsolt Kira},
  journal = {arXiv preprint arXiv:2506.07936},
  year    = {2025}
}