A Robust Bias Mitigation Procedure Based on the Stereotype Content Model
arXiv:2210.14552 · 26 October 2022
Eddie L. Ungless, Amy Rafferty, Hrichika Nag, Björn Ross
Papers citing "A Robust Bias Mitigation Procedure Based on the Stereotype Content Model" (21 papers shown)
| Title | Authors | Citations | Date |
| --- | --- | --- | --- |
| Relative Bias: A Comparative Framework for Quantifying Bias in LLMs | Alireza Arbabi, Florian Kerschbaum | 0 | 22 May 2025 |
| Gender-Neutral Large Language Models for Medical Applications: Reducing Bias in PubMed Abstracts | Elizabeth Schaefer, Kirk Roberts | 0 | 10 Jan 2025 |
| Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings | Carolin M. Schuster, Maria-Alexandra Dinisor, Shashwat Ghatiwala, Georg Groh | 1 | 25 Nov 2024 |
| Bias Testing and Mitigation in LLM-based Code Generation | Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui | 23 | 03 Sep 2023 |
| Theory-Grounded Measurement of U.S. Social Stereotypes in English Language Models | Yang Trista Cao, Anna Sotnikova, Hal Daumé, Rachel Rudinger, L. Zou | 49 | 23 Jun 2022 |
| On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations | Yang Trista Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, Aram Galstyan | 94 | 25 Mar 2022 |
| An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models | Nicholas Meade, Elinor Poole-Dayan, Siva Reddy | 127 | 16 Oct 2021 |
| Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model | Kathleen C. Fraser, I. Nejadgholi, S. Kiritchenko | 39 | 04 Jun 2021 |
| Generating Datasets with Pretrained Language Models | Timo Schick, Hinrich Schütze | 235 | 15 Apr 2021 |
| Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP | Timo Schick, Sahana Udupa, Hinrich Schütze | 380 | 28 Feb 2021 |
| Debiasing Pre-trained Contextualised Embeddings | Masahiro Kaneko, Danushka Bollegala | 140 | 23 Jan 2021 |
| Intrinsic Bias Metrics Do Not Correlate with Application Bias | Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, Adam Lopez | 173 | 31 Dec 2020 |
| Measuring and Reducing Gendered Correlations in Pre-trained Models | Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, Slav Petrov | 256 | 12 Oct 2020 |
| Towards Debiasing NLU Models from Unknown Biases | Prasetya Ajie Utama, N. Moosavi, Iryna Gurevych | 155 | 25 Sep 2020 |
| Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases | W. Guo, Aylin Caliskan | 237 | 06 Jun 2020 |
| StereoSet: Measuring stereotypical bias in pretrained language models | Moin Nadeem, Anna Bethke, Siva Reddy | 979 | 20 Apr 2020 |
| The Woman Worked as a Babysitter: On Biases in Language Generation | Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng | 631 | 03 Sep 2019 |
| BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | 93,936 | 11 Oct 2018 |
| GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman | 7,080 | 20 Apr 2018 |
| Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods | Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang | 919 | 18 Apr 2018 |
| Deep contextualized word representations | Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer | 11,520 | 15 Feb 2018 |