VISTA-Bench: Do Vision-Language Models Really Understand Visualized Text as Well as Pure Text?

Qingán Liu
Juntong Feng
Yuhao Wang
Xinzhe Han
Yujie Cheng
Yue Zhu
Haiwen Diao
Yunzhi Zhuge
Huchuan Lu
Main: 8 Pages
20 Figures
Bibliography: 1 Page
7 Tables
Appendix: 18 Pages
Abstract

Vision-Language Models (VLMs) have achieved impressive performance in cross-modal understanding across textual and visual inputs, yet existing benchmarks predominantly focus on pure-text queries. In real-world scenarios, language also frequently appears as visualized text embedded in images, raising the question of whether current VLMs handle such inputs comparably. We introduce VISTA-Bench, a systematic benchmark spanning multimodal perception, multimodal reasoning, and unimodal understanding domains. It evaluates visualized-text understanding by contrasting pure-text and visualized-text questions under controlled rendering conditions. Extensive evaluation of over 20 representative VLMs reveals a pronounced modality gap: models that perform well on pure-text queries often degrade substantially when semantically equivalent content is presented as visualized text. This gap is further amplified by increased perceptual difficulty, highlighting sensitivity to rendering variations despite unchanged semantics. Overall, VISTA-Bench provides a principled evaluation framework to diagnose this limitation and to guide progress toward more unified language representations across tokenized text and pixels. The source dataset is available at this https URL.
