ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Theories of "Gender" in NLP Bias Research

5 May 2022
Hannah Devinney, Jenny Björklund, H. Björklund
AI4CE
ArXiv (abs) · PDF · HTML

Papers citing "Theories of "Gender" in NLP Bias Research"

50 / 117 papers shown
Reducing Sentiment Bias in Language Models via Counterfactual Evaluation
Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack W. Rae, Vishal Maini, Dani Yogatama, Pushmeet Kohli · 08 Nov 2019

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis · FaML · 04 Nov 2019

On the Unintended Social Bias of Training Language Generation Models with Data from Local Media
Omar U. Florez · 01 Nov 2019

Probabilistic Bias Mitigation in Word Embeddings
Hailey James, David Alvarez-Melis · 31 Oct 2019

Toward Gender-Inclusive Coreference Resolution
Yang Trista Cao, Hal Daumé · 30 Oct 2019

Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition
Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan · 24 Oct 2019

Does Gender Matter? Towards Fairness in Dialogue Systems
Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, Jiliang Tang · 16 Oct 2019

Empirical Analysis of Multi-Task Learning for Reducing Model Bias in Toxic Comment Detection
Ameya Vaidya, Feng Mai, Yue Ning · 21 Sep 2019

Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
Jaap Jumelet, Willem H. Zuidema, Dieuwke Hupkes · LRM · 19 Sep 2019

A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces
Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, Ivan Vulić · 13 Sep 2019

Getting Gender Right in Neural Machine Translation
Eva Vanmassenhove, Christian Hardmeier, Andy Way · 11 Sep 2019

Examining Gender Bias in Languages with Grammatical Gender
Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, Kai-Wei Chang · 05 Sep 2019

The Woman Worked as a Babysitter: On Biases in Language Generation
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng · 03 Sep 2019

It's All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, Simone Teufel · 02 Sep 2019

(Male, Bachelor) and (Female, Ph.D) have different connotations: Parallelly Annotated Stylistic Language Dataset with Multiple Personas
Dongyeop Kang, Varun Gangal, Eduard H. Hovy · 31 Aug 2019

Automatically Inferring Gender Associations from Language
Serina Chang, Kathleen McKeown · FaML · 30 Aug 2019

On Measuring and Mitigating Biased Inferences of Word Embeddings
Sunipa Dev, Tao Li, J. M. Phillips, Vivek Srikumar · 25 Aug 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan · SyDa, FaML · 23 Aug 2019

Understanding Undesirable Word Embedding Associations
Kawin Ethayarajh, David Duvenaud, Graeme Hirst · FaML · 18 Aug 2019

Debiasing Embeddings for Reduced Gender Bias in Text Classification
Flavien Prost, Nithum Thain, Tolga Bolukbasi · FaML · 07 Aug 2019

MSnet: A BERT-based Network for Gendered Pronoun Resolution
Zili Wang · 01 Aug 2019

Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis
J. Bhaskaran, Isha Bhallamudi · 24 Jun 2019

Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang · AI4CE · 21 Jun 2019

Considerations for the Interpretation of Bias Measures of Word Embeddings
I. Mirzaev, Anthony Schulte, Michael D. Conover, Sam Shah · 19 Jun 2019

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov · 18 Jun 2019

Conceptor Debiasing of Word Representations Evaluated on WEAT
S. Karve, Lyle Ungar, João Sedoc · FaML · 14 Jun 2019

Unsupervised Discovery of Gendered Language through Latent-Variable Modeling
Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna M. Wallach, Isabelle Augenstein, Ryan Cotterell · 11 Jun 2019

Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Ran Zmigrod, Sabrina J. Mielke, Hanna M. Wallach, Ryan Cotterell · 11 Jun 2019

Gendered Pronoun Resolution using BERT and an extractive question answering formulation
Rakesh Chada · FaML · 09 Jun 2019

Gendered Ambiguous Pronouns Shared Task: Boosting Model Confidence by Evidence Pooling
Sandeep Attree · 03 Jun 2019

Gender-preserving Debiasing for Pre-trained Word Embeddings
Masahiro Kaneko, Danushka Bollegala · FaML · 03 Jun 2019

Resolving Gendered Ambiguous Pronouns with BERT
Kellie Webster, Marta Recasens, Ken Krige, Vera Axelrod, Denis Logvinenko, Jason Baldridge · 03 Jun 2019

Evaluating Gender Bias in Machine Translation
Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer · 03 Jun 2019

Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function
Yusu Qian, Urwa Muaz, Ben Zhang, J. Hyun · FaML · 30 May 2019

On Measuring Gender Bias in Translation of Gender-neutral Pronouns
Won Ik Cho, Jiwon Kim, Seokhwan Kim, N. Kim · 28 May 2019

Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor
Malvina Nissim, Rik van Noord, Rob van der Goot · FaML · 23 May 2019

Look Again at the Syntax: Relational Graph Convolutional Network for Gendered Ambiguous Pronoun Resolution
Yinchuan Xu, Junlin Yang · GNN · 21 May 2019

Anonymized BERT: An Augmentation Approach to the Gendered Pronoun Resolution Challenge
Bo Liu · 06 May 2019

Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors
Anne Lauscher, Goran Glavaš · 26 Apr 2019

Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Christine Basta, Marta R. Costa-jussá, Noe Casas · 18 Apr 2019

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes
Alexey Romanov, Maria De-Arteaga, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Anna Rumshisky, Adam Tauman Kalai · 10 Apr 2019

Gender Bias in Contextualized Word Embeddings
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang · FaML · 05 Apr 2019

Identifying and Reducing Gender Bias in Word-Level Language Models
Shikha Bordia, Samuel R. Bowman · FaML · 05 Apr 2019

Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings
Thomas Manzini, Y. Lim, Yulia Tsvetkov, A. Black · FaML · 03 Apr 2019

On Measuring Social Biases in Sentence Encoders
Chandler May, Alex Jinpeng Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger · 25 Mar 2019

Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
Hila Gonen, Yoav Goldberg · 09 Mar 2019

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Adam Tauman Kalai · 27 Jan 2019

Attenuating Bias in Word Vectors
Sunipa Dev, J. M. Phillips · FaML · 23 Jan 2019

Equalizing Gender Biases in Neural Machine Translation with Word Embeddings Techniques
Joel Escudé Font, Marta R. Costa-jussá · 10 Jan 2019

What are the biases in my word embedding?
Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark D. M. Leiserson, Adam Tauman Kalai · 20 Dec 2018