Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias

21 December 2022
Robert Wolfe, Yiwei Yang, Bill Howe, Aylin Caliskan
DiffM

Papers citing "Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias"

36 / 36 papers shown

Attention IoU: Examining Biases in CelebA using Attention Maps
Aaron Serianni, Tyler Zhu, Olga Russakovsky, V. V. Ramaswamy
45 / 0 / 0 · 25 Mar 2025

Hyperbolic Safety-Aware Vision-Language Models
Tobia Poppi, Tejaswi Kasarla, Pascal Mettes, Lorenzo Baraldi, Rita Cucchiara
VLM, MU · 66 / 0 / 0 · 15 Mar 2025

VisBias: Measuring Explicit and Implicit Social Biases in Vision Language Models
Jen-tse Huang, Jiantong Qin, Jianping Zhang, Youliang Yuan, Wenxuan Wang, Jieyu Zhao
VLM · 64 / 0 / 0 · 10 Mar 2025

VLMs as GeoGuessr Masters: Exceptional Performance, Hidden Biases, and Privacy Risks
Jingyuan Huang, Jen-tse Huang, Ziyi Liu, Xiaoyuan Liu, Wenxuan Wang, Jieyu Zhao
55 / 1 / 0 · 16 Feb 2025

Intrinsic Bias is Predicted by Pretraining Data and Correlates with Downstream Performance in Vision-Language Encoders
Kshitish Ghate, Isaac Slaughter, Kyra Wilson, Mona Diab, Aylin Caliskan
86 / 1 / 0 · 11 Feb 2025

ML-EAT: A Multilevel Embedding Association Test for Interpretable and Transparent Social Science
Robert Wolfe, Alexis Hiniker, Bill Howe
48 / 0 / 0 · 04 Aug 2024

Grounding and Evaluation for Large Language Models: Practical Challenges and Lessons Learned (Survey)
K. Kenthapadi, M. Sameki, Ankur Taly
HILM, ELM, AILaw · 44 / 12 / 0 · 10 Jul 2024

Fairness and Bias in Multimodal AI: A Survey
Tosin P. Adewumi, Lama Alkhaled, Namrata Gurung, G. V. Boven, Irene Pagliai
58 / 9 / 0 · 27 Jun 2024

Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp
Rachel Hong, William Agnew, Tadayoshi Kohno, Jamie Morgenstern
27 / 9 / 0 · 13 May 2024

SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety
Paul Röttger, Fabio Pernisi, Bertie Vidgen, Dirk Hovy
ELM, KELM · 60 / 32 / 0 · 08 Apr 2024

Reflecting the Male Gaze: Quantifying Female Objectification in 19th and 20th Century Novels
Kexin Luo, Yue Mao, Bei Zhang, Sophie Hao
32 / 1 / 0 · 25 Mar 2024

Just Say the Name: Online Continual Learning with Category Names Only via Data Generation
Minhyuk Seo, Diganta Misra, Seongwon Cho, Minjae Lee, Jonghyun Choi
CLL · 41 / 7 / 0 · 16 Mar 2024

Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images
Kathleen C. Fraser, S. Kiritchenko
49 / 34 / 0 · 08 Feb 2024

Harm Amplification in Text-to-Image Models
Susan Hao, Renee Shelby, Yuchi Liu, Hansa Srinivasan, Mukul Bhutani, Burcu Karagol Ayan, Ryan Poplin, Shivani Poddar, Sarah Laszlo
43 / 7 / 0 · 01 Feb 2024

Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You
Felix Friedrich, Katharina Hämmerl, P. Schramowski, Manuel Brack, Jindrich Libovický, Kristian Kersting, Alexander Fraser
EGVM · 32 / 10 / 0 · 29 Jan 2024

From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models
Wolfgang Messner, Tatum Greene, Josephine Matalone
32 / 4 / 0 · 21 Dec 2023

From Google Gemini to OpenAI Q* (Q-Star): A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape
Timothy R. McIntosh, Teo Susnjak, Tong Liu, Paul Watters, Malka N. Halgamuge
94 / 48 / 0 · 18 Dec 2023

Stable Diffusion Exposed: Gender Bias from Prompt to Image
Yankun Wu, Yuta Nakashima, Noa Garcia
28 / 16 / 0 · 05 Dec 2023

Finetuning Text-to-Image Diffusion Models for Fairness
Xudong Shen, Chao Du, Tianyu Pang, Min Lin, Yongkang Wong, Mohan S. Kankanhalli
26 / 50 / 0 · 11 Nov 2023

'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion
Sourojit Ghosh, Aylin Caliskan
49 / 30 / 0 · 30 Oct 2023

Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition
Isaac Slaughter, Craig Greenberg, Reva Schwartz, Aylin Caliskan
35 / 4 / 0 · 29 Oct 2023

Regulation and NLP (RegNLP): Taming Large Language Models
Catalina Goanta, Nikolaos Aletras, Ilias Chalkidis, S. Ranchordas, Gerasimos Spanakis
AILaw · 15 / 3 / 0 · 09 Oct 2023

Biased Attention: Do Vision Transformers Amplify Gender Bias More than Convolutional Neural Networks?
Abhishek Mandal, Susan Leavy, Suzanne Little
ViT · 27 / 5 / 0 · 15 Sep 2023

Is the U.S. Legal System Ready for AI's Challenges to Human Values?
Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno
SILM, ELM, AILaw · 30 / 1 / 0 · 30 Aug 2023

Trustworthy Representation Learning Across Domains
Ronghang Zhu, Dongliang Guo, Daiqing Qi, Zhixuan Chu, Xiang Yu, Sheng Li
FaML, AI4TS · 39 / 2 / 0 · 23 Aug 2023

AI's Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia
Rida Qadri, Renee Shelby, Cynthia L. Bennett, Emily Denton
29 / 67 / 0 · 19 May 2023

Fairness in Language Models Beyond English: Gaps and Challenges
Krithika Ramesh, Sunayana Sitaram, Monojit Choudhury
32 / 23 / 0 · 24 Feb 2023

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan
DiffM, VLM · 39 / 290 / 0 · 07 Nov 2022

When and why vision-language models behave like bags-of-words, and what to do about it?
Mert Yuksekgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, James Zou
VLM, CoGe · 30 / 364 / 0 · 04 Oct 2022

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, Steven C.H. Hoi
MLLM, BDL, VLM, CLIP · 392 / 4,171 / 0 · 28 Jan 2022

Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search
Jialu Wang, Yang Liu, Xin Eric Wang
FaML · 157 / 95 / 0 · 12 Sep 2021

Open-vocabulary Object Detection via Vision and Language Knowledge Distillation
Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, Yin Cui
VLM, ObjD · 225 / 899 / 0 · 28 Apr 2021

Zero-Shot Text-to-Image Generation
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM · 255 / 4,805 / 0 · 24 Feb 2021

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, Tom Duerig
VLM, CLIP · 337 / 3,720 / 0 · 11 Feb 2021

The Woman Worked as a Babysitter: On Biases in Language Generation
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
223 / 620 / 0 · 03 Sep 2019

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
306 / 10,378 / 0 · 12 Dec 2018