ResearchTrend.AI
A Stereotype Content Analysis on Color-related Social Bias in Large Vision Language Models

27 May 2025
Junhyuk Choi, Minju Kim, Yeseon Hong, Bugeun Kim
arXiv:2505.20901

Papers citing "A Stereotype Content Analysis on Color-related Social Bias in Large Vision Language Models"

22 citing papers shown
ColorBench: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and Robustness
Yijun Liang, Ming Li, Chenrui Fan, Ziyue Li, Dang Nguyen, Kwesi Cobbina, Shweta Bhardwaj, Jiuhai Chen, Fuxiao Liu, Tianyi Zhou
10 Apr 2025

When Tom Eats Kimchi: Evaluating Cultural Bias of Multimodal Large Language Models in Cultural Mixture Contexts
Jun Seong Kim, Kyaw Ye Thu, Javad Ismayilzada, Junyeong Park, Eunsu Kim, Huzama Ahmad, Na Min An, James Thorne, Alice Oh
21 Mar 2025

Qwen2.5-VL Technical Report
S. Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, ..., Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, Junyang Lin
20 Feb 2025

Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images
Shengguang Wu, Fan-Yun Sun, Kaiyue Wen, Nick Haber
19 Feb 2025

VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment
Lei Li, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, L. Chen, Yazheng Yang, Benyou Wang, Dianbo Sui, Qiang Liu
12 Oct 2024

Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts
Xuyang Wu, Yuan Wang, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang
25 Jun 2024

Understanding the Impact of Negative Prompts: When and How Do They Take Effect?
Yuanhao Ban, Ruochen Wang, Tianyi Zhou, Minhao Cheng, Boqing Gong, Cho-Jui Hsieh
05 Jun 2024

Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals
Phillip Howard, Kathleen C. Fraser, Anahita Bhiwandiwalla, S. Kiritchenko
30 May 2024

Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery
Guan-Feng Wang, Long Bai, Wan Jun Nah, Jie Wang, Zhaoxi Zhang, Zhen Chen, Jinlin Wu, Mobarakol Islam, Hongbin Liu, Hongliang Ren
22 Mar 2024

A Unified Framework and Dataset for Assessing Societal Bias in Vision-Language Models
Ashutosh Sathe, Prachi Jain, Sunayana Sitaram
21 Feb 2024

Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images
Kathleen C. Fraser, S. Kiritchenko
08 Feb 2024

New Job, New Gender? Measuring the Social Bias in Image Generation Models
Wenxuan Wang, Haonan Bai, Jen-tse Huang, Yuxuan Wan, Youliang Yuan, Haoyi Qiu, Nanyun Peng, Michael R. Lyu
01 Jan 2024

SocialCounterfactuals: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples
Phillip Howard, Avinash Madasu, Tiep Le, Gustavo Lujan Moreno, Anahita Bhiwandiwalla, Vasudev Lal
30 Nov 2023

Generative Language Models Exhibit Social Identity Biases
Tiancheng Hu, Yara Kyrychenko, Steve Rathje, Nigel Collier, S. V. D. Linden, Jon Roozenbeek
24 Oct 2023

SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis
Dustin Podell, Zion English, Kyle Lacey, A. Blattmann, Tim Dockhorn, Jonas Muller, Joe Penna, Robin Rombach
04 Jul 2023

Gender Biases in Automatic Evaluation Metrics for Image Captioning
Haoyi Qiu, Zi-Yi Dou, Tianlu Wang, Asli Celikyilmaz, Nanyun Peng
24 May 2023

Social Biases through the Text-to-Image Generation Lens
Ranjita Naik, Besmira Nushi
30 Mar 2023

Debiasing Vision-Language Models via Biased Prompts
Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, Stefanie Jegelka
31 Jan 2023

Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
Kathleen C. Fraser, I. Nejadgholi, S. Kiritchenko
04 Jun 2021

Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks
Jayendra Kantipudi, S. Dubey, Soumendu Chakraborty
20 Dec 2020

StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
20 Apr 2020

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers, Iryna Gurevych
27 Aug 2019