ResearchTrend.AI

Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

Hila Gonen, Yoav Goldberg
9 March 2019 · arXiv:1903.03862

Papers citing "Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them"

Showing 50 of 307 citing papers.
  • Debiasing Vision-Language Models via Biased Prompts (Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, Stefanie Jegelka; 31 Jan 2023) [VLM]
  • How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification (E. Tokpo, Pieter Delobelle, Bettina Berendt, T. Calders; 30 Jan 2023)
  • Counteracts: Testing Stereotypical Representation in Pre-trained Language Models (Damin Zhang, Julia Taylor Rayz, Romila Pradhan; 11 Jan 2023)
  • Trustworthy Social Bias Measurement (Rishi Bommasani, Percy Liang; 20 Dec 2022)
  • On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (Omar Shaikh, Hongxin Zhang, William B. Held, Michael S. Bernstein, Diyi Yang; 15 Dec 2022) [ReLM, LRM]
  • Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology (Valentin Hofmann, J. Pierrehumbert, Hinrich Schütze; 14 Dec 2022)
  • Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection (P. Haghighatkhah, Antske Fokkens, Pia Sommerauer, Bettina Speckmann, Kevin Verbeek; 08 Dec 2022)
  • Undesirable Biases in NLP: Addressing Challenges of Measurement (Oskar van der Wal, Dominik Bachmann, Alina Leidinger, L. Maanen, Willem H. Zuidema, K. Schulz; 24 Nov 2022)
  • Conceptor-Aided Debiasing of Large Language Models (Yifei Li, Lyle Ungar, João Sedoc; 20 Nov 2022)
  • Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models (Silke Husse, Andreas Spitz; 15 Nov 2022)
  • Does Debiasing Inevitably Degrade the Model Performance (Yiran Liu, Xiao-Yang Liu, Haotian Chen, Yang Yu; 14 Nov 2022)
  • Bridging Fairness and Environmental Sustainability in Natural Language Processing (Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher; 08 Nov 2022)
  • Choose Your Lenses: Flaws in Gender Bias Evaluation (Hadas Orgad, Yonatan Belinkov; 20 Oct 2022)
  • Systematic Evaluation of Predictive Fairness (Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann; 17 Oct 2022)
  • Controlling Bias Exposure for Fair Interpretable Predictions (Zexue He, Yu-Xiang Wang, Julian McAuley, Bodhisattwa Prasad Majumder; 14 Oct 2022)
  • InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions (Bodhisattwa Prasad Majumder, Zexue He, Julian McAuley; 14 Oct 2022)
  • SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models (Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger; 13 Oct 2022)
  • Social-Group-Agnostic Word Embedding Debiasing via the Stereotype Content Model (Ali Omrani, Brendan Kennedy, M. Atari, Morteza Dehghani; 11 Oct 2022)
  • The Lifecycle of "Facts": A Survey of Social Bias in Knowledge Graphs (Angelie Kraft, Ricardo Usbeck; 07 Oct 2022) [KELM]
  • Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks (Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki; 06 Oct 2022)
  • Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation (Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, Gido Schoenmacker; 20 Sep 2022) [FaML]
  • Efficient Gender Debiasing of Pre-trained Indic Language Models (Neeraja Kirtane, V. Manushree, Aditya Kane; 08 Sep 2022)
  • Debiasing Word Embeddings with Nonlinear Geometry (Lu Cheng, Nayoung Kim, Huan Liu; 29 Aug 2022)
  • A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America (Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía González, Mariela Rajngewerc, ..., Guido Ivetta, Alexia Halvorsen, Amanda Rojo, M. Bordone, Beatriz Busaniche; 14 Jul 2022)
  • Word Embedding for Social Sciences: An Interdisciplinary Survey (Akira Matsui, Emilio Ferrara; 07 Jul 2022)
  • Don't Forget About Pronouns: Removing Gender Bias in Language Models Without Losing Factual Gender Information (Tomasz Limisiewicz, David Marecek; 21 Jun 2022)
  • Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics (Aylin Caliskan, Pimparkar Parth Ajay, Tessa E. S. Charlesworth, Robert Wolfe, M. Banaji; 07 Jun 2022) [CVBM, FaML]
  • Toward Understanding Bias Correlations for Mitigation in NLP (Lu Cheng, Suyu Ge, Huan Liu; 24 May 2022)
  • Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements (Conrad Borchers, Dalia Sara Gala, Ben Gilburt, Eduard Oravkin, Wilfried Bounsi, Yuki M. Asano, Hannah Rose Kirk; 23 May 2022) [AI4CE]
  • How sensitive are translation systems to extra contexts? Mitigating gender bias in Neural Machine Translation models through relevant contexts (Shanya Sharma, Manan Dey, Koustuv Sinha; 22 May 2022)
  • How to keep text private? A systematic review of deep learning methods for privacy-preserving natural language processing (Samuel Sousa, Roman Kern; 20 May 2022) [PILM, AILaw]
  • Gender Bias in Meta-Embeddings (Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki; 19 May 2022)
  • Exploiting Social Media Content for Self-Supervised Style Transfer (Dana Ruiter, Thomas Kleinbauer, C. España-Bonet, Josef van Genabith, Dietrich Klakow; 18 May 2022)
  • Towards Debiasing Translation Artifacts (Koel Dutta Chowdhury, Rricha Jalota, C. España-Bonet, Josef van Genabith; 16 May 2022)
  • Fair NLP Models with Differentially Private Text Encoders (Gaurav Maheshwari, Pascal Denis, Mikaela Keller, A. Bellet; 12 May 2022) [FedML, SILM]
  • Mitigating Gender Stereotypes in Hindi and Marathi (Neeraja Kirtane, Tanvi Anand; 12 May 2022)
  • Theories of "Gender" in NLP Bias Research (Hannah Devinney, Jenny Björklund, H. Björklund; 05 May 2022) [AI4CE]
  • User-Centric Gender Rewriting (Bashar Alhafni, Nizar Habash, Houda Bouamor; 04 May 2022)
  • Visualizing and Explaining Language Models (Adrian M. P. Braşoveanu, Razvan Andonie; 30 Apr 2022) [MILM, VLM]
  • QRelScore: Better Evaluating Generated Questions with Deeper Understanding of Context-aware Relevance (Xiaoqiang Wang, Bang Liu, Siliang Tang, Lingfei Wu; 29 Apr 2022)
  • Balancing Fairness and Accuracy in Sentiment Detection using Multiple Black Box Models (Abdulaziz A. Almuzaini, V. Singh; 22 Apr 2022) [MLAU, FaML]
  • Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias (Anoop Kadan, Manjary P.Gangan, Deepak P, L. LajishV.; 21 Apr 2022) [AI4CE]
  • How Gender Debiasing Affects Internal Model Representations, and Why It Matters (Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov; 14 Apr 2022)
  • Generating Full Length Wikipedia Biographies: The Impact of Gender Bias on the Retrieval-Based Generation of Women Biographies (Angela Fan, Claire Gardent; 12 Apr 2022)
  • Fair and Argumentative Language Modeling for Computational Argumentation (Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto; 08 Apr 2022)
  • Mapping the Multilingual Margins: Intersectional Biases of Sentiment Analysis Systems in English, Spanish, and Arabic (Antonio Camara, Nina Taneja, Tamjeed Azad, Emily Allaway, R. Zemel; 07 Apr 2022)
  • Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask (Bilal Ghanem, Lauren Lutz Coleman, Julia Rivard Dexter, Spencer McIntosh von der Ohe, Alona Fyshe; 06 Apr 2022) [AI4Ed]
  • A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning (Hugo Elias Berg, S. Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain; 22 Mar 2022) [VLM]
  • Gold Doesn't Always Glitter: Spectral Removal of Linear and Nonlinear Guarded Attribute Information (Shun Shao, Yftah Ziser, Shay B. Cohen; 15 Mar 2022) [AAML]
  • A Survey on Bias and Fairness in Natural Language Processing (Rajas Bansal; 06 Mar 2022) [SyDa]