PARC: A Quantitative Framework Uncovering the Symmetries within Vision Language Models

Vision language models (VLMs) respond to user-crafted text prompts and visual inputs, and are applied to numerous real-world problems. VLMs integrate visual modalities with large language models (LLMs), which are well known to be prompt-sensitive. Hence, it is crucial to determine whether VLMs inherit this instability under varying prompts. We therefore investigate which prompt variations VLMs are most sensitive to, and which VLMs are most agnostic to prompt variations. To this end, we introduce PARC (Prompt Analysis via Reliability and Calibration), a VLM prompt sensitivity analysis framework built on three pillars: (1) plausible prompt variations in both the language and vision domains, (2) a novel model reliability score with built-in guarantees, and (3) a calibration step that enables dataset- and prompt-spanning analyses of prompt variations. Regarding prompt variations, PARC's evaluation shows that VLMs mirror LLM language prompt sensitivity in the vision domain, and that the most destructive variations are those that change the expected answer. Regarding models, the outstandingly robust VLMs among the 22 evaluated models come from the InternVL2 family. We further find indications that prompt sensitivity is linked to training data. The code will be at this https URL.
@article{schmalfuss2025_2506.14808,
  title={PARC: A Quantitative Framework Uncovering the Symmetries within Vision Language Models},
  author={Jenny Schmalfuss and Nadine Chang and Vibashan VS and Maying Shen and Andres Bruhn and Jose M. Alvarez},
  journal={arXiv preprint arXiv:2506.14808},
  year={2025}
}