Mitigating Gender Bias in Natural Language Processing: Literature Review
arXiv:1906.08976. 21 June 2019.
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang.
Papers citing "Mitigating Gender Bias in Natural Language Processing: Literature Review" (50 of 278 papers shown)
Hope Speech detection in under-resourced Kannada language. Adeep Hande, R. Priyadharshini, Anbukkarasi Sampath, K. Thamburaj, Prabakaran Chandran, Bharathi Raja Chakravarthi. 10 Aug 2021.
On Measures of Biases and Harms in NLP. Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, ..., M. Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang. 07 Aug 2021.
Your fairness may vary: Pretrained language model fairness in toxic text classification. Ioana Baldini, Dennis L. Wei, K. Ramamurthy, Mikhail Yurochkin, Moninder Singh. 03 Aug 2021.
Interactive Storytelling for Children: A Case-study of Design and Development Considerations for Ethical Conversational AI. J. Chubb, S. Missaoui, S. Concannon, Liam Maloney, James Alfred Walker. 20 Jul 2021.
Intersectional Bias in Causal Language Models. Liam Magee, Lida Ghahremanlou, K. Soldatić, S. Robertson. 16 Jul 2021.
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics. Paula Czarnowska, Yogarshi Vyas, Kashif Shah. 28 Jun 2021.
Towards Understanding and Mitigating Social Biases in Language Models. Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov. 24 Jun 2021.
A Survey of Race, Racism, and Anti-Racism in NLP. Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov. 21 Jun 2021.
Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification. Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, Kai-Wei Chang. 21 Jun 2021.
RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models. Soumya Barikeri, Anne Lauscher, Ivan Vulić, Goran Glavas. 07 Jun 2021.
Towards Equal Gender Representation in the Annotations of Toxic Language Detection. Elizabeth Excell, Noura Al Moubayed. 04 Jun 2021.
Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions? Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Kai-Wei Chang. 02 Jun 2021.
Changing the World by Changing the Data. Anna Rogers. 28 May 2021.
How to Split: the Effect of Word Segmentation on Gender Bias in Speech Translation. Marco Gaido, Beatrice Savoldi, L. Bentivogli, Matteo Negri, Marco Turchi. 28 May 2021.
Metadata Normalization. Mandy Lu, Qingyu Zhao, Jiequan Zhang, K. Pohl, L. Fei-Fei, Juan Carlos Niebles, Ehsan Adeli. 19 Apr 2021.
Human-Imitating Metrics for Training and Evaluating Privacy Preserving Emotion Recognition Models Using Sociolinguistic Knowledge. Mimansa Jaiswal, E. Provost. 18 Apr 2021.
Improving Gender Translation Accuracy with Filtered Self-Training. Prafulla Kumar Choubey, Anna Currey, Prashant Mathur, Georgiana Dinu. 15 Apr 2021.
First the worst: Finding better gender translations during beam search. D. Saunders, Rosie Sallis, Bill Byrne. 15 Apr 2021.
Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey. Danielle Saunders. 14 Apr 2021.
Gender Bias in Machine Translation. Beatrice Savoldi, Marco Gaido, L. Bentivogli, Matteo Negri, Marco Turchi. 13 Apr 2021.
Semantic maps and metrics for science using deep transformer encoders. Brendan Chambers, James A. Evans. 13 Apr 2021.
HLE-UPC at SemEval-2021 Task 5: Multi-Depth DistilBERT for Toxic Spans Detection. Rafel Palliser, Albert Rial. 01 Apr 2021.
SelfExplain: A Self-Explaining Architecture for Neural Text Classifiers. Dheeraj Rajagopal, Vidhisha Balachandran, Eduard H. Hovy, Yulia Tsvetkov. 23 Mar 2021.
Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources. Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan. 21 Mar 2021.
Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do. P. Schramowski, Cigdem Turan, Nico Andersen, Constantin Rothkopf, Kristian Kersting. 08 Mar 2021.
Exploring a Makeup Support System for Transgender Passing based on Automatic Gender Recognition. T. Chong, Nolwenn Maudet, Katsuki Harima, Takeo Igarashi. 08 Mar 2021.
They, Them, Theirs: Rewriting with Gender-Neutral English. Tony Sun, Kellie Webster, Apurva Shah, William Yang Wang, Melvin Johnson. 12 Feb 2021.
Machine Translationese: Effects of Algorithmic Bias on Linguistic Complexity in Machine Translation. Eva Vanmassenhove, D. Shterionov, M. Gwilliam. 30 Jan 2021.
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation. Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta. 27 Jan 2021.
Re-imagining Algorithmic Fairness in India and Beyond. Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran. 25 Jan 2021.
Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models. Daniel de Vassimon Manela, D. Errington, Thomas Fisher, B. V. Breugel, Pasquale Minervini. 24 Jan 2021.
Censorship of Online Encyclopedias: Implications for NLP Models. Eddie Yang, Margaret E. Roberts. 22 Jan 2021.
Understanding the Tradeoffs in Client-side Privacy for Downstream Speech Tasks. Peter Wu, Paul Pu Liang, Jiatong Shi, Ruslan Salakhutdinov, Shinji Watanabe, Louis-Philippe Morency. 22 Jan 2021.
Investigating Memorization of Conspiracy Theories in Text Generation. Sharon Levy, Michael Stephen Saxon, Luu Anh Tuan. 02 Jan 2021.
Breeding Gender-aware Direct Speech Translation Systems. Marco Gaido, Beatrice Savoldi, L. Bentivogli, Matteo Negri, Marco Turchi. 09 Dec 2020.
Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately. Fereshte Khani, Percy Liang. 07 Dec 2020.
The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability. Sunipa Dev. 25 Nov 2020.
Unequal Representations: Analyzing Intersectional Biases in Word Embeddings Using Representational Similarity Analysis. Michael A. Lepori. 24 Nov 2020.
Argument from Old Man's View: Assessing Social Bias in Argumentation. Maximilian Spliethover, Henning Wachsmuth. 24 Nov 2020.
Automatic Detection of Machine Generated Text: A Critical Survey. Ganesh Jawahar, Muhammad Abdul-Mageed, L. Lakshmanan. 02 Nov 2020.
Evaluating Bias In Dutch Word Embeddings. Rodrigo Alejandro Chávez Mulsa, Gerasimos Spanakis. 31 Oct 2020.
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias. Marion Bartl, Malvina Nissim, Albert Gatt. 27 Oct 2020.
Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg. 15 Oct 2020.
Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It. Danielle Saunders, Rosie Sallis, Bill Byrne. 11 Oct 2020.
Case Study: Deontological Ethics in NLP. Shrimai Prabhumoye, Brendon Boldt, Ruslan Salakhutdinov, A. Black. 09 Oct 2020.
Astraea: Grammar-based Fairness Testing. E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay. 06 Oct 2020.
UnQovering Stereotyping Biases via Underspecified Questions. Tao Li, Tushar Khot, Daniel Khashabi, Ashish Sabharwal, Vivek Srikumar. 06 Oct 2020.
Fairness in Machine Learning: A Survey. Simon Caton, C. Haas. 04 Oct 2020.
Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness. Lingjuan Lyu, Xuanli He, Yitong Li. 03 Oct 2020.
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking. M. Schlichtkrull, Nicola De Cao, Ivan Titov. 01 Oct 2020.