Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

9 March 2019
Hila Gonen, Yoav Goldberg

Papers citing "Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them"

50 / 307 papers shown
CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
CML · LRM · 44 · 157 · 0 · 27 May 2020

MT-Adapted Datasheets for Datasets: Template and Repository
Marta R. Costa-jussà, Roger Creus, Oriol Domingo, A. Domínguez, Miquel Escobar, Cayetana López, Marina Garcia, Margarita Geleta
23 · 12 · 0 · 27 May 2020

Towards classification parity across cohorts
Aarsh Patel, Rahul Gupta, Mukund Sridhar, Satyapriya Krishna, Aman Alok, Peng Liu
FaML · 23 · 1 · 0 · 16 May 2020

Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning
Procheta Sen, Debasis Ganguly
27 · 18 · 0 · 14 May 2020

Deep Learning for Political Science
Kakia Chatsiou, Slava Jankin
AI4CE · 34 · 12 · 0 · 13 May 2020

Machine Learning on Graphs: A Model and Comprehensive Taxonomy
Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, Kevin Patrick Murphy
22 · 285 · 0 · 07 May 2020

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang, Xi Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong
CVBM · 157 · 54 · 0 · 03 May 2020

Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer
Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, Ahmed Hassan Awadallah
14 · 88 · 0 · 02 May 2020

Multi-Dimensional Gender Bias Classification
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
FaML · 22 · 122 · 0 · 01 May 2020

Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Rada Mihalcea
42 · 207 · 0 · 01 May 2020

When do Word Embeddings Accurately Reflect Surveys on our Beliefs About People?
K. Joseph, Jonathan H. Morgan
6 · 27 · 0 · 25 Apr 2020

StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
37 · 957 · 0 · 20 Apr 2020

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg
23 · 368 · 0 · 16 Apr 2020

Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem
Danielle Saunders, Bill Byrne
AI4CE · 24 · 137 · 0 · 09 Apr 2020

Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation
Seung-Jae Shin, Kyungwoo Song, Joonho Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon
30 · 20 · 0 · 07 Apr 2020

"You are grounded!": Latent Name Artifacts in Pre-trained Language Models
Vered Shwartz, Rachel Rudinger, Oyvind Tafjord
14 · 50 · 0 · 06 Apr 2020

Bias in Machine Learning -- What is it Good for?
Thomas Hellström, Virginia Dignum, Suna Bensch
AI4CE · FaML · 14 · 3 · 0 · 01 Apr 2020

On the Integration of Linguistic Features into Statistical and Neural Machine Translation
Eva Vanmassenhove
11 · 0 · 0 · 31 Mar 2020

FrameAxis: Characterizing Microframe Bias and Intensity with Word Embedding
Haewoon Kwak, Jisun An, Elise Jing, Yong-Yeol Ahn
19 · 44 · 0 · 20 Feb 2020

Algorithmic Fairness
Dana Pessach, E. Shmueli
FaML · 33 · 388 · 0 · 21 Jan 2020

Measuring Social Bias in Knowledge Graph Embeddings
Joseph Fisher, Dave Palfrey, Christos Christodoulopoulos, Arpit Mittal
FaML · 15 · 36 · 0 · 05 Dec 2019

A Causal Inference Method for Reducing Gender Bias in Word Embedding Relations
Zekun Yang, Juan Feng
FaML · 8 · 36 · 0 · 25 Nov 2019

Automatically Neutralizing Subjective Bias in Text
Reid Pryzant, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, Diyi Yang
30 · 175 · 0 · 21 Nov 2019

Correcting Sociodemographic Selection Biases for Population Prediction from Social Media
Salvatore Giorgi, Veronica E. Lynn, Keshav Gupta, F. Ahmed, S. Matz, Lyle Ungar, H. Andrew Schwartz
9 · 24 · 0 · 10 Nov 2019

Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
Deven Santosh Shah, H. Andrew Schwartz, Dirk Hovy
AI4CE · 27 · 258 · 0 · 09 Nov 2019

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis
FaML · 27 · 223 · 0 · 04 Nov 2019

Probabilistic Bias Mitigation in Word Embeddings
Hailey James, David Alvarez-Melis
9 · 4 · 0 · 31 Oct 2019

How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?
Hila Gonen, Yova Kementchedjhieva, Yoav Goldberg
FaML · 8 · 22 · 0 · 30 Oct 2019

Perturbation Sensitivity Analysis to Detect Unintended Model Biases
Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell
22 · 117 · 0 · 09 Oct 2019

Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
Jaap Jumelet, Willem H. Zuidema, Dieuwke Hupkes
LRM · 33 · 37 · 0 · 19 Sep 2019

Decision-Directed Data Decomposition
Brent D. Davis, Ethan Jackson, D. Lizotte
22 · 2 · 0 · 18 Sep 2019

A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces
Anne Lauscher, Goran Glavas, Simone Paolo Ponzetto, Ivan Vulić
29 · 62 · 0 · 13 Sep 2019

Neural Embedding Allocation: Distributed Representations of Topic Models
Kamrun Naher Keya, Yannis Papanikolaou, James R. Foulds
BDL · 19 · 5 · 0 · 10 Sep 2019

Examining Gender Bias in Languages with Grammatical Gender
Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, Kai-Wei Chang
16 · 103 · 0 · 05 Sep 2019

Interpretable Word Embeddings via Informative Priors
Miriam Hurtado Bodell, Martin Arvidsson, Måns Magnusson
27 · 18 · 0 · 03 Sep 2019

It's All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, Simone Teufel
11 · 168 · 0 · 02 Sep 2019

Rotate King to get Queen: Word Relationships as Orthogonal Transformations in Embedding Space
Kawin Ethayarajh
LLMSV · 13 · 13 · 0 · 02 Sep 2019

Automatically Inferring Gender Associations from Language
Serina Chang, Kathleen McKeown
FaML · 8 · 11 · 0 · 30 Aug 2019

Unlearn Dataset Bias in Natural Language Inference by Fitting the Residual
He He, Sheng Zha, Haohan Wang
22 · 197 · 0 · 28 Aug 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa · FaML · 335 · 4,223 · 0 · 23 Aug 2019

Understanding Undesirable Word Embedding Associations
Kawin Ethayarajh, David Duvenaud, Graeme Hirst
FaML · 11 · 124 · 0 · 18 Aug 2019

Debiasing Embeddings for Reduced Gender Bias in Text Classification
Flavien Prost, Nithum Thain, Tolga Bolukbasi
FaML · 14 · 50 · 0 · 07 Aug 2019

Paired-Consistency: An Example-Based Model-Agnostic Approach to Fairness Regularization in Machine Learning
Yair Horesh, N. Haas, Elhanan Mishraky, Yehezkel S. Resheff, Shir Meir Lador
FaML · 14 · 7 · 0 · 07 Aug 2019

Using Word Embeddings to Examine Gender Bias in Dutch Newspapers, 1950-1990
M. Wevers
13 · 32 · 0 · 21 Jul 2019

Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis
J. Bhaskaran, Isha Bhallamudi
27 · 46 · 0 · 24 Jun 2019

Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang
AI4CE · 47 · 542 · 0 · 21 Jun 2019

Conceptor Debiasing of Word Representations Evaluated on WEAT
S. Karve, Lyle Ungar, João Sedoc
FaML · 22 · 33 · 0 · 14 Jun 2019

Tracing Antisemitic Language Through Diachronic Embedding Projections: France 1789-1914
Rocco Tripodi, M. Warglien, S. Sullam, Deborah Paci
LLMSV · 6 · 20 · 0 · 04 Jun 2019

Evaluating Gender Bias in Machine Translation
Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer
19 · 393 · 0 · 03 Jun 2019

Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function
Yusu Qian, Urwa Muaz, Ben Zhang, J. Hyun
FaML · 19 · 96 · 0 · 30 May 2019