Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

9 March 2019
Hila Gonen
Yoav Goldberg

Papers citing "Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them"

Showing 50 of 307 citing papers.
  • Improving Gender Translation Accuracy with Filtered Self-Training. Prafulla Kumar Choubey, Anna Currey, Prashant Mathur, Georgiana Dinu. 15 Apr 2021.
  • [RE] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation. Haswanth Aekula, Sugam Garg, Animesh Gupta. 14 Apr 2021.
  • On the Interpretability and Significance of Bias Metrics in Texts: a PMI-based Approach. Francisco Valentini, Germán Rosati, Damián E. Blasi, D. Slezak, Edgar Altszyler. 13 Apr 2021.
  • Gender Bias in Machine Translation. Beatrice Savoldi, Marco Gaido, L. Bentivogli, Matteo Negri, Marco Turchi. 13 Apr 2021.
  • VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations. Archit Rathore, Sunipa Dev, J. M. Phillips, Vivek Srikumar, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Bei Wang. 06 Apr 2021.
  • What Will it Take to Fix Benchmarking in Natural Language Understanding? Samuel R. Bowman, George E. Dahl. 05 Apr 2021.
  • Gender and Racial Fairness in Depression Research using Social Media. Carlos Alejandro Aguirre, Keith Harrigian, Mark Dredze. 18 Mar 2021.
  • DebIE: A Platform for Implicit and Explicit Debiasing of Word Embedding Spaces. Niklas Friedrich, Anne Lauscher, Simone Paolo Ponzetto, Goran Glavaš. 11 Mar 2021.
  • Interpretable bias mitigation for textual data: Reducing gender bias in patient notes while maintaining classification performance. J. Minot, N. Cheney, Marc E. Maier, Danne C. Elbers, C. Danforth, P. Dodds. 10 Mar 2021.
  • Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. Timo Schick, Sahana Udupa, Hinrich Schütze. 28 Feb 2021.
  • They, Them, Theirs: Rewriting with Gender-Neutral English. Tony Sun, Kellie Webster, Apurva Shah, William Yang Wang, Melvin Johnson. 12 Feb 2021.
  • Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models. Hannah Rose Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, F. Dreyer, Aleksandar Shtedritski, Yuki M. Asano. 08 Feb 2021.
  • From Toxicity in Online Comments to Incivility in American News: Proceed with Caution. A. Hede, Oshin Agarwal, L. Lu, Diana C. Mutz, A. Nenkova. 06 Feb 2021.
  • Disembodied Machine Learning: On the Illusion of Objectivity in NLP. Zeerak Talat, Smarika Lulz, Joachim Bingel, Isabelle Augenstein. 28 Jan 2021.
  • Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models. Daniel de Vassimon Manela, D. Errington, Thomas Fisher, B. V. Breugel, Pasquale Minervini. 24 Jan 2021.
  • Dictionary-based Debiasing of Pre-trained Word Embeddings. Masahiro Kaneko, Danushka Bollegala. 23 Jan 2021.
  • Debiasing Pre-trained Contextualised Embeddings. Masahiro Kaneko, Danushka Bollegala. 23 Jan 2021.
  • Censorship of Online Encyclopedias: Implications for NLP Models. Eddie Yang, Margaret E. Roberts. 22 Jan 2021.
  • Intrinsic Bias Metrics Do Not Correlate with Application Bias. Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, Adam Lopez. 31 Dec 2020.
  • Model Choices Influence Attributive Word Associations: A Semi-supervised Analysis of Static Word Embeddings. Geetanjali Bihani, Julia Taylor Rayz. 14 Dec 2020.
  • The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability. Sunipa Dev. 25 Nov 2020.
  • Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal Clinical NLP. John Chen, Ian Berlot-Attwell, Safwan Hossain, Xindi Wang, Frank Rudzicz. 19 Nov 2020.
  • How to Measure Gender Bias in Machine Translation: Optimal Translators, Multiple Reference Points. A. Farkas, Renáta Németh. 12 Nov 2020.
  • Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research. Lucy Havens, Melissa Mhairi Terras, Benjamin Bach, Beatrice Alex. 11 Nov 2020.
  • On the State of Social Media Data for Mental Health Research. Keith Harrigian, Carlos Alejandro Aguirre, Mark Dredze. 10 Nov 2020.
  • Investigating Societal Biases in a Poetry Composition System. Emily Sheng, David C. Uthus. 05 Nov 2020.
  • AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings. Anne Lauscher, Rafik Takieddin, Simone Paolo Ponzetto, Goran Glavaš. 03 Nov 2020.
  • Evaluating Bias In Dutch Word Embeddings. Rodrigo Alejandro Chávez Mulsa, Gerasimos Spanakis. 31 Oct 2020.
  • "Thy algorithm shalt not bear false witness": An Evaluation of Multiclass Debiasing Methods on Word Embeddings. Thalea Schlender, Gerasimos Spanakis. 30 Oct 2020.
  • Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias. Marion Bartl, Malvina Nissim, Albert Gatt. 27 Oct 2020.
  • Discovering and Interpreting Biased Concepts in Online Communities. Xavier Ferrer-Aran, Tom van Nuenen, Natalia Criado, Jose Such. 27 Oct 2020.
  • Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings. Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Kumar. 25 Oct 2020.
  • Measuring and Reducing Gendered Correlations in Pre-trained Models. Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, Slav Petrov. 12 Oct 2020.
  • Robustness and Reliability of Gender Bias Assessment in Word Embeddings: The Role of Base Pairs. Haiyang Zhang, Alison Sneyd, Mark Stevenson. 06 Oct 2020.
  • Astraea: Grammar-based Fairness Testing. E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay. 06 Oct 2020.
  • Which *BERT? A Survey Organizing Contextualized Encoders. Patrick Xia, Shijie Wu, Benjamin Van Durme. 02 Oct 2020.
  • CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman. 30 Sep 2020.
  • Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation. Francisco Vargas, Ryan Cotterell. 20 Sep 2020.
  • Investigating Gender Bias in BERT. Rishabh Bhardwaj, Navonil Majumder, Soujanya Poria. 10 Sep 2020.
  • Going Beyond T-SNE: Exposing whatlies in Text Embeddings. Vincent D. Warmerdam, Thomas Kober, Rachael Tatman. 04 Sep 2020.
  • Assessing Demographic Bias in Named Entity Recognition. Shubhanshu Mishra, Sijun He, Luca Belli. 08 Aug 2020.
  • Defining and Evaluating Fair Natural Language Generation. C. Yeo, A. Chen. 28 Jul 2020.
  • Towards Debiasing Sentence Representations. Paul Pu Liang, Irene Z Li, Emily Zheng, Y. Lim, Ruslan Salakhutdinov, Louis-Philippe Morency. 16 Jul 2020.
  • Cultural Cartography with Word Embeddings. Dustin S. Stoltz, Marshall A. Taylor. 09 Jul 2020.
  • OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings. Sunipa Dev, Tao Li, J. M. Phillips, Vivek Srikumar. 30 Jun 2020.
  • Large image datasets: A pyrrhic win for computer vision? Vinay Uday Prabhu, Abeba Birhane. 24 Jun 2020.
  • Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases. W. Guo, Aylin Caliskan. 06 Jun 2020.
  • Higher-Order Explanations of Graph Neural Networks via Relevant Walks. Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon. 05 Jun 2020.
  • Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
  • Language (Technology) is Power: A Critical Survey of "Bias" in NLP. Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach. 28 May 2020.