Diagnosing Vision Language Models' Perception by Leveraging Human Methods for Color Vision Deficiencies

Large-scale Vision Language Models (LVLMs) are increasingly being applied to a wide range of real-world multimodal applications involving complex visual and linguistic reasoning. As these models become more integrated into practical use, they are expected to handle complex aspects of human interaction. Among these, color perception is a fundamental yet highly variable aspect of visual understanding: it differs across individuals due to biological factors such as Color Vision Deficiencies (CVDs), as well as differences in culture and language. Despite its importance, perceptual diversity has received limited attention. In this study, we evaluate LVLMs' ability to account for individual-level perceptual variation using the Ishihara Test, a widely used method for detecting CVDs. Our results show that LVLMs can explain CVDs in natural language, but they cannot simulate how people with CVDs perceive color in image-based tasks. These findings highlight the need for multimodal systems that can account for color perceptual diversity, and they support broader discussions on perceptual inclusiveness and fairness in multimodal AI.
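The abstract does not detail how image-based CVD simulation is performed; as a rough illustration only, the sketch below applies the full-severity protanopia matrix of Machado et al. (2009) to an image in approximately linear RGB. The file names, the gamma-2.2 approximation of the sRGB transfer curve, and the choice of this particular matrix are assumptions for illustration, not the authors' evaluation pipeline.

    import numpy as np
    from PIL import Image

    # Machado et al. (2009) full-severity protanopia matrix (linear RGB).
    PROTANOPIA = np.array([
        [ 0.152286,  1.052583, -0.204868],
        [ 0.114503,  0.786281,  0.099216],
        [-0.003882, -0.048116,  1.051998],
    ])

    def simulate_protanopia(path_in, path_out):
        # Hypothetical file paths; load image and normalize to [0, 1].
        rgb = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float64) / 255.0
        linear = rgb ** 2.2                        # approximate sRGB -> linear RGB
        simulated = linear @ PROTANOPIA.T          # apply the CVD simulation matrix
        simulated = np.clip(simulated, 0.0, 1.0) ** (1 / 2.2)  # back to approximate sRGB
        Image.fromarray((simulated * 255).astype(np.uint8)).save(path_out)

A transformation of this kind is what an LVLM would need to approximate when asked to describe an Ishihara plate as a person with protanopia would see it.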
@article{hayashi2025_2505.17461,
  title={Diagnosing Vision Language Models' Perception by Leveraging Human Methods for Color Vision Deficiencies},
  author={Kazuki Hayashi and Shintaro Ozaki and Yusuke Sakai and Hidetaka Kamigaito and Taro Watanabe},
  journal={arXiv preprint arXiv:2505.17461},
  year={2025}
}