
A Stereotype Content Analysis on Color-related Social Bias in Large Vision Language Models

Abstract

As large vision language models (LVLMs) rapidly advance, concerns are growing about their potential to learn and reproduce social biases and stereotypes. Previous studies of stereotypes in LVLMs face two primary limitations: metrics that overlook the importance of content words, and datasets that overlook the effect of color. To address these limitations, this study introduces new evaluation metrics based on the Stereotype Content Model (SCM). We also propose BASIC, a benchmark for assessing gender, race, and color stereotypes. Using the SCM-based metrics and BASIC, we evaluate eight LVLMs and report three findings: (1) the SCM-based evaluation is effective in capturing stereotypes; (2) LVLMs exhibit color stereotypes in their outputs alongside gender and race stereotypes; and (3) the interaction between model architecture and parameter size appears to affect stereotypes. We release BASIC publicly at [anonymized for review].

@article{choi2025_2505.20901,
  title={A Stereotype Content Analysis on Color-related Social Bias in Large Vision Language Models},
  author={Junhyuk Choi and Minju Kim and Yeseon Hong and Bugeun Kim},
  journal={arXiv preprint arXiv:2505.20901},
  year={2025}
}