
Investigating Mechanisms for In-Context Vision Language Binding

Main: 4 pages, 6 figures, 1 table; Bibliography: 1 page
Abstract

To understand a prompt, Vision-Language Models (VLMs) must perceive the image, comprehend the text, and build associations within and across both modalities. For instance, given an image of a 'red toy car', the model should associate this image with phrases like 'car', 'red toy', 'red object', etc. Feng and Steinhardt propose the Binding ID mechanism in LLMs, suggesting that an entity and its corresponding attribute tokens share a Binding ID in the model activations. We investigate this for image-text binding in VLMs using a synthetic dataset and a task that requires models to associate 3D objects in an image with their descriptions in the text. Our experiments demonstrate that VLMs assign a distinct Binding ID to an object's image tokens and its textual references, enabling in-context association.
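To make the Binding ID idea concrete, below is a minimal sketch of the kind of activation-difference and mean-intervention test used to probe binding in model activations. All names, shapes, and the assumption that residual-stream activations have already been extracted are illustrative; this is not the authors' actual experimental code.

```python
# Minimal sketch of a binding-ID probe (hypothetical setup).
# Assumes acts_a / acts_b are pre-extracted residual-stream activations of
# shape (seq_len, d_model) for two contexts that differ only in which
# description is bound to which object.
import numpy as np

def binding_direction(acts_a, acts_b, positions):
    """Estimate a binding direction as the mean activation difference at the
    given token positions (e.g. the image tokens of one object, or the tokens
    of its textual description)."""
    return acts_b[positions].mean(axis=0) - acts_a[positions].mean(axis=0)

def swap_binding(acts, pos_obj_0, pos_obj_1, d_bind):
    """Mean-intervention test: add the estimated binding direction to one
    object's activations and subtract it from the other's. If a shared
    Binding ID drives the in-context association, the patched model should
    answer as if the two objects' descriptions were exchanged."""
    patched = acts.copy()
    patched[pos_obj_0] += d_bind
    patched[pos_obj_1] -= d_bind
    return patched
```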

@article{saravanan2025_2505.22200,
  title={Investigating Mechanisms for In-Context Vision Language Binding},
  author={Darshana Saravanan and Makarand Tapaswi and Vineet Gandhi},
  journal={arXiv preprint arXiv:2505.22200},
  year={2025}
}