ResearchTrend.AI

Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias (arXiv:2010.14534)

27 October 2020
Marion Bartl
Malvina Nissim
Albert Gatt

Papers citing "Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias"

50 of 66 citing papers shown (bracketed codes are the site's topic tags):

1. A Comprehensive Analysis of Large Language Model Outputs: Similarity, Diversity, and Bias. Brandon Smith, Mohamed Reda Bouadjenek, Tahsin Alamgir Kheya, Phillip Dawson, S. Aryal. 14 May 2025. [ALM, ELM]
2. Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text. Jennifer Healey, Laurie Byrum, Md Nadeem Akhtar, Surabhi Bhargava, Moumita Sinha. 05 May 2025.
3. Assumed Identities: Quantifying Gender Bias in Machine Translation of Gender-Ambiguous Occupational Terms. Orfeas Menis Mastromichalakis, Giorgos Filandrianos, Maria Symeonaki, Giorgos Stamou. 06 Mar 2025.
4. Rethinking LLM Bias Probing Using Lessons from the Social Sciences. Kirsten N. Morehouse, S. Swaroop, Weiwei Pan. 28 Feb 2025.
5. Robust Bias Detection in MLMs and its Application to Human Trait Ratings. Ingroj Shrestha, Louis Tay, Padmini Srinivasan. 24 Feb 2025.
6. Detecting Linguistic Bias in Government Documents Using Large Language Models. Milena de Swart, Floris den Hengst, Jieying Chen. 20 Feb 2025.
7. Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs. Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo. 04 Feb 2025.
8. LangFair: A Python Package for Assessing Bias and Fairness in Large Language Model Use Cases. Dylan Bouchard, Mohit Singh Chauhan, David Skarbrevik, Viren Bajaj, Zeya Ahmad. 06 Jan 2025.
9. Everyone deserves their voice to be heard: Analyzing Predictive Gender Bias in ASR Models Applied to Dutch Speech Data. Rik Raes, Saskia Lensink, Mykola Pechenizkiy. 14 Nov 2024.
10. Collapsed Language Models Promote Fairness. Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei. 06 Oct 2024.
11. Are Female Carpenters like Blue Bananas? A Corpus Investigation of Occupation Gender Typicality. Da Ju, Karen Ulrich, Adina Williams. 06 Aug 2024.
12. Downstream bias mitigation is all you need. Arkadeep Baksi, Rahul Singh, Tarun Joshi. 01 Aug 2024. [AI4CE]
13. Understanding the Interplay of Scale, Data, and Bias in Language Models: A Case Study with BERT. Muhammad Ali, Swetasudha Panda, Qinlan Shen, Michael Wick, Ari Kobren. 25 Jul 2024. [MILM]
14. Exploring Bengali Religious Dialect Biases in Large Language Models with Evaluation Perspectives. Azmine Toushik Wasi, Raima Islam, Mst Rafia Islam, Taki Hasan Rafi, Dong-Kyu Chae. 25 Jul 2024.
15. Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words. Yijie Chen, Yijin Liu, Fandong Meng, Jinan Xu, Jie Zhou. 23 Jul 2024.
16. Evaluating Nuanced Bias in Large Language Model Free Response Answers. Jennifer Healey, Laurie Byrum, Md Nadeem Akhtar, Moumita Sinha. 11 Jul 2024.
17. Leveraging Large Language Models to Measure Gender Bias in Gendered Languages. Erik Derner, Sara Sansalvador de la Fuente, Yoan Gutiérrez, Paloma Moreda, Nuria Oliver. 19 Jun 2024.
18. The Life Cycle of Large Language Models: A Review of Biases in Education. Jinsook Lee, Yann Hicke, Renzhe Yu, Christopher A. Brooks, René F. Kizilcec. 03 Jun 2024. [AI4Ed]
19. Large Language Model Bias Mitigation from the Perspective of Knowledge Editing. Ruizhe Chen, Yichen Li, Zikai Xiao, Zuo-Qiang Liu. 15 May 2024. [KELM]
20. Fairness in Large Language Models: A Taxonomic Survey. Zhibo Chu, Zichong Wang, Wenbin Zhang. 31 Mar 2024. [AILaw]
21. Potential and Challenges of Model Editing for Social Debiasing. Jianhao Yan, Futing Wang, Yafu Li, Yue Zhang. 21 Feb 2024. [KELM]
22. Large Language Models are Geographically Biased. Rohin Manvi, Samar Khanna, Marshall Burke, David B. Lobell, Stefano Ermon. 05 Feb 2024.
23. Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You. Felix Friedrich, Katharina Hämmerl, P. Schramowski, Manuel Brack, Jindrich Libovický, Kristian Kersting, Alexander Fraser. 29 Jan 2024. [EGVM]
24. Multilingual large language models leak human stereotypes across language boundaries. Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé. 12 Dec 2023. [PILM]
25. Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies. Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth. 03 Dec 2023. [ALM]
26. Evaluating Large Language Models through Gender and Racial Stereotypes. Ananya Malik. 24 Nov 2023. [ELM]
27. Benefits and Harms of Large Language Models in Digital Mental Health. Munmun De Choudhury, Sachin R. Pendse, Neha Kumar. 07 Nov 2023. [LM&MA, AI4MH]
28. Unlocking Bias Detection: Leveraging Transformer-Based Models for Content Analysis. Shaina Raza, Oluwanifemi Bamgbose, Veronica Chatrath, Shardul Ghuge, Yan Sidyakin, Abdullah Y. Muaad. 30 Sep 2023.
29. Bias and Fairness in Large Language Models: A Survey. Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed. 02 Sep 2023. [AILaw]
30. CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias. Vipul Gupta, Pranav Narayanan Venkit, Hugo Laurenccon, Shomir Wilson, R. Passonneau. 24 Aug 2023.
31. Mitigating Bias in Conversations: A Hate Speech Classifier and Debiaser with Prompts. Shaina Raza, Chen Ding, D. Pandya. 14 Jul 2023. [FaML]
32. How Different Is Stereotypical Bias Across Languages? Ibrahim Tolga Ozturk, R. Nedelchev, C. Heumann, Esteban Garces Arias, Marius Roger, Bernd Bischl, Matthias Aßenmacher. 14 Jul 2023.
33. Gender Bias in BERT -- Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task. Sophie F. Jentzsch, Cigdem Turan. 27 Jun 2023.
34. Gender Bias in Transformer Models: A comprehensive survey. Praneeth Nemani, Yericherla Deepak Joel, Pallavi Vijay, Farhana Ferdousi Liza. 18 Jun 2023.
35. Politeness Stereotypes and Attack Vectors: Gender Stereotypes in Japanese and Korean Language Models. Victor Steinborn, Antonis Maronikolakis, Hinrich Schütze. 16 Jun 2023.
36. Sociodemographic Bias in Language Models: A Survey and Forward Path. Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau. 13 Jun 2023.
37. Trade-Offs Between Fairness and Privacy in Language Modeling. Cleo Matzken, Steffen Eger, Ivan Habernal. 24 May 2023. [SILM]
38. On the Independence of Association Bias and Empirical Fairness in Language Models. Laura Cabello, Anna Katrine van Zee, Anders Søgaard. 20 Apr 2023.
39. Measuring Gender Bias in West Slavic Language Models. Sandra Martinková, Karolina Stañczak, Isabelle Augenstein. 12 Apr 2023.
40. Language Model Behavior: A Comprehensive Survey. Tyler A. Chang, Benjamin Bergen. 20 Mar 2023. [VLM, LRM, LM&MA]
41. Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning. Hongyin Luo, James R. Glass. 10 Mar 2023. [NAI]
42. BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models. Rafal Kocielnik, Shrimai Prabhumoye, Vivian Zhang, Roy Jiang, R. Alvarez, Anima Anandkumar. 14 Feb 2023.
43. SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings. Jan Engler, Sandipan Sikdar, Marlene Lutz, M. Strohmaier. 11 Jan 2023.
44. Can Current Task-oriented Dialogue Models Automate Real-world Scenarios in the Wild? Sang-Woo Lee, Sungdong Kim, Donghyeon Ko, Dong-hyun Ham, Youngki Hong, ..., Wangkyo Jung, Kyunghyun Cho, Donghyun Kwak, H. Noh, W. Park. 20 Dec 2022.
45. HERB: Measuring Hierarchical Regional Bias in Pre-trained Language Models. Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Shi Wang, Anton Ragni, Jie Fu. 05 Nov 2022.
46. MABEL: Attenuating Gender Bias using Textual Entailment Data. Jacqueline He, Mengzhou Xia, C. Fellbaum, Danqi Chen. 26 Oct 2022.
47. Detecting Unintended Social Bias in Toxic Language Datasets. Nihar Ranjan Sahoo, Himanshu Gupta, P. Bhattacharyya. 21 Oct 2022.
48. Choose Your Lenses: Flaws in Gender Bias Evaluation. Hadas Orgad, Yonatan Belinkov. 20 Oct 2022.
49. Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks. Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki. 06 Oct 2022.
50. Efficient Gender Debiasing of Pre-trained Indic Language Models. Neeraja Kirtane, V. Manushree, Aditya Kane. 08 Sep 2022.