Learning Gender-Neutral Word Embeddings

arXiv:1809.01496 · 29 August 2018
Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, Kai-Wei Chang
FaML

Papers citing "Learning Gender-Neutral Word Embeddings"

25 / 75 papers shown

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang, Xi Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong
CVBM
157 · 54 · 0 · 03 May 2020

Multi-Dimensional Gender Bias Classification
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
FaML
32 · 122 · 0 · 01 May 2020

Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Rada Mihalcea
53 · 207 · 0 · 01 May 2020

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg
44 · 370 · 0 · 16 Apr 2020

Joint Multiclass Debiasing of Word Embeddings
Radovan Popović, Florian Lemmerich, M. Strohmaier
FaML
24 · 6 · 0 · 09 Mar 2020

Algorithmic Fairness
Dana Pessach, E. Shmueli
FaML
33 · 386 · 0 · 21 Jan 2020

Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
43 · 206 · 0 · 10 Nov 2019

Towards Understanding Gender Bias in Relation Extraction
Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, ..., Jieyu Zhao, Diba Mirza, E. Belding, Kai-Wei Chang, William Yang Wang
FaML
33 · 40 · 0 · 09 Nov 2019

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis
FaML
27 · 224 · 0 · 04 Nov 2019

Toward Gender-Inclusive Coreference Resolution
Yang Trista Cao, Hal Daumé
31 · 141 · 0 · 30 Oct 2019

Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition
Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan
15 · 56 · 0 · 24 Oct 2019

A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces
Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, Ivan Vulić
34 · 61 · 0 · 13 Sep 2019

Interpretable Word Embeddings via Informative Priors
Miriam Hurtado Bodell, Martin Arvidsson, Måns Magnusson
35 · 18 · 0 · 03 Sep 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML
352 · 4,237 · 0 · 23 Aug 2019

Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis
J. Bhaskaran, Isha Bhallamudi
27 · 46 · 0 · 24 Jun 2019

Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang
AI4CE
47 · 543 · 0 · 21 Jun 2019

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
63 · 444 · 0 · 18 Jun 2019

Sentiment analysis is not solved! Assessing and probing sentiment classification
Jeremy Barnes, Lilja Øvrelid, Erik Velldal
19 · 32 · 0 · 13 Jun 2019

Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Christine Basta, Marta R. Costa-jussà, Noe Casas
24 · 189 · 0 · 18 Apr 2019

Analytical Methods for Interpretable Ultradense Word Embeddings
Philipp Dufter, Hinrich Schütze
37 · 25 · 0 · 18 Apr 2019

Gender Bias in Contextualized Word Embeddings
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang
FaML
42 · 417 · 0 · 05 Apr 2019

Identifying and Reducing Gender Bias in Word-Level Language Models
Shikha Bordia, Samuel R. Bowman
FaML
48 · 323 · 0 · 05 Apr 2019

Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
Hila Gonen, Yoav Goldberg
60 · 567 · 0 · 09 Mar 2019

Equalizing Gender Biases in Neural Machine Translation with Word Embeddings Techniques
Joel Escudé Font, Marta R. Costa-jussà
18 · 168 · 0 · 10 Jan 2019

Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez
FaML
34 · 17 · 0 · 20 Nov 2018