Choose Your Lenses: Flaws in Gender Bias Evaluation

Hadas Orgad, Yonatan Belinkov
20 October 2022 · arXiv 2210.11471 (abs · PDF · HTML)

Papers citing "Choose Your Lenses: Flaws in Gender Bias Evaluation"

50 of 51 citing papers shown.
How Gender Debiasing Affects Internal Model Representations, and Why It Matters. Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov. 14 Apr 2022.
On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations. Yang Trista Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, Aram Galstyan. 25 Mar 2022.
A Survey on Gender Bias in Natural Language Processing. Karolina Stańczak, Isabelle Augenstein. 28 Dec 2021.
Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models. Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt. 14 Dec 2021.
Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies. Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, J. M. Phillips, Kai-Wei Chang. 27 Aug 2021.
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation. Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta. 27 Jan 2021.
Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models. Daniel de Vassimon Manela, D. Errington, Thomas Fisher, B. V. Breugel, Pasquale Minervini. 24 Jan 2021.
Dictionary-based Debiasing of Pre-trained Word Embeddings. Masahiro Kaneko, Danushka Bollegala. 23 Jan 2021.
Debiasing Pre-trained Contextualised Embeddings. Masahiro Kaneko, Danushka Bollegala. 23 Jan 2021.
Intrinsic Bias Metrics Do Not Correlate with Application Bias. Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, Adam Lopez. 31 Dec 2020.
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias. Marion Bartl, Malvina Nissim, Albert Gatt. 27 Oct 2020.
Mitigating Gender Bias in Machine Translation with Target Gender Annotations. Artūrs Stafanovičs, Toms Bergmanis, Mārcis Pinnis. 13 Oct 2020.
Measuring and Reducing Gendered Correlations in Pre-trained Models. Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, Slav Petrov. 12 Oct 2020.
Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It. Danielle Saunders, Rosie Sallis, Bill Byrne. 11 Oct 2020.
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman. 30 Sep 2020.
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases. W. Guo, Aylin Caliskan. 06 Jun 2020.
Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings. Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Kumar, Tanmoy Chakraborty. 02 Jun 2020.
Language (Technology) is Power: A Critical Survey of "Bias" in NLP. Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach. 28 May 2020.
StereoSet: Measuring stereotypical bias in pretrained language models. Moin Nadeem, Anna Bethke, Siva Reddy. 20 Apr 2020.
Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem. Danielle Saunders, Bill Byrne. 09 Apr 2020.
Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation. Seung-Jae Shin, Kyungwoo Song, Joonho Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon. 07 Apr 2020.
Information-Theoretic Probing with Minimum Description Length. Elena Voita, Ivan Titov. 27 Mar 2020.
Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation. Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston. 10 Nov 2019.
Designing and Interpreting Probes with Control Tasks. John Hewitt, Percy Liang. 08 Sep 2019.
It's All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution. Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, Simone Teufel. 02 Sep 2019.
On Measuring and Mitigating Biased Inferences of Word Embeddings. Sunipa Dev, Tao Li, J. M. Phillips, Vivek Srikumar. 25 Aug 2019.
Understanding Undesirable Word Embedding Associations. Kawin Ethayarajh, David Duvenaud, Graeme Hirst. 18 Aug 2019.
Debiasing Embeddings for Reduced Gender Bias in Text Classification. Flavien Prost, Nithum Thain, Tolga Bolukbasi. 07 Aug 2019.
RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov. 26 Jul 2019.
Mitigating Gender Bias in Natural Language Processing: Literature Review. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang. 21 Jun 2019.
Measuring Bias in Contextualized Word Representations. Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov. 18 Jun 2019.
Conceptor Debiasing of Word Representations Evaluated on WEAT. S. Karve, Lyle Ungar, João Sedoc. 14 Jun 2019.
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology. Ran Zmigrod, Sabrina J. Mielke, Hanna M. Wallach, Ryan Cotterell. 11 Jun 2019.
Gender-preserving Debiasing for Pre-trained Word Embeddings. Masahiro Kaneko, Danushka Bollegala. 03 Jun 2019.
Resolving Gendered Ambiguous Pronouns with BERT. Kellie Webster, Marta Recasens, Ken Krige, Vera Axelrod, Denis Logvinenko, Jason Baldridge. 03 Jun 2019.
Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function. Yusu Qian, Urwa Muaz, Ben Zhang, J. Hyun. 30 May 2019.
Gender Bias in Contextualized Word Embeddings. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang. 05 Apr 2019.
Identifying and Reducing Gender Bias in Word-Level Language Models. Shikha Bordia, Samuel R. Bowman. 05 Apr 2019.
On Measuring Social Biases in Sentence Encoders. Chandler May, Alex Jinpeng Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger. 25 Mar 2019.
Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. Hila Gonen, Yoav Goldberg. 09 Mar 2019.
Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Adam Tauman Kalai. 27 Jan 2019.
Reducing Gender Bias in Abusive Language Detection. Ji Ho Park, Jamin Shin, Pascale Fung. 22 Aug 2018.
Adversarial Removal of Demographic Attributes from Text Data. Yanai Elazar, Yoav Goldberg. 20 Aug 2018.
Towards Robust and Privacy-preserving Text Representations. Yitong Li, Timothy Baldwin, Trevor Cohn. 16 May 2018.
Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. S. Kiritchenko, Saif M. Mohammad. 11 May 2018.
Gender Bias in Coreference Resolution. Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme. 25 Apr 2018.
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. 18 Apr 2018.
Mitigating Unwanted Biases with Adversarial Learning. B. Zhang, Blake Lemoine, Margaret Mitchell. 22 Jan 2018.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. 29 Jul 2017.
Semantics derived automatically from language corpora contain human-like biases. Aylin Caliskan, J. Bryson, Arvind Narayanan. 25 Aug 2016.