ResearchTrend.AI
  3. 1908.06361
  4. Cited By
Understanding Undesirable Word Embedding Associations

Understanding Undesirable Word Embedding Associations

18 August 2019
Kawin Ethayarajh
David Duvenaud
Graeme Hirst
    FaML
ArXivPDFHTML

Papers citing "Understanding Undesirable Word Embedding Associations" (36 papers)
• Who Does the Giant Number Pile Like Best: Analyzing Fairness in Hiring Contexts. Preethi Seshadri, Seraphina Goldfarb-Tarrant. 08 Jan 2025.
• Measuring Social Biases in Masked Language Models by Proxy of Prediction Quality. Rahul Zalkikar, Kanchan Chandra. 21 Feb 2024.
• Learning to Generate Equitable Text in Dialogue from Biased Training Data. Anthony Sicilia, Malihe Alikhani. 10 Jul 2023.
• Trustworthy Social Bias Measurement. Rishi Bommasani, Percy Liang. 20 Dec 2022.
• Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection. P. Haghighatkhah, Antske Fokkens, Pia Sommerauer, Bettina Speckmann, Kevin Verbeek. 08 Dec 2022.
• Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models. Silke Husse, Andreas Spitz. 15 Nov 2022.
• No Word Embedding Model Is Perfect: Evaluating the Representation Accuracy for Social Bias in the Media. Maximilian Spliethover, Maximilian Keiff, Henning Wachsmuth. 07 Nov 2022.
• Choose Your Lenses: Flaws in Gender Bias Evaluation. Hadas Orgad, Yonatan Belinkov. 20 Oct 2022.
• Large scale analysis of gender bias and sexism in song lyrics. L. Betti, Carlo Abrate, Andreas Kaltenbrunner. 03 Aug 2022.
• The Birth of Bias: A case study on the evolution of gender bias in an English language model. Oskar van der Wal, Jaap Jumelet, K. Schulz, Willem H. Zuidema. 21 Jul 2022.
• [Re] Badder Seeds: Reproducing the Evaluation of Lexical Methods for Bias Measurement. Jille van der Togt, Lea Tiyavorabun, Matteo Rosati, Giulio Starace. 03 Jun 2022.
• Mitigating Gender Stereotypes in Hindi and Marathi. Neeraja Kirtane, Tanvi Anand. 12 May 2022.
• Richer Countries and Richer Representations. Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky. 10 May 2022.
• Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words. Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, Dan Jurafsky. 10 May 2022.
• How Gender Debiasing Affects Internal Model Representations, and Why It Matters. Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov. 14 Apr 2022.
• Sense Embeddings are also Biased -- Evaluating Social Biases in Static and Contextualised Sense Embeddings. Yi Zhou, Masahiro Kaneko, Danushka Bollegala. 14 Mar 2022.
• Regional Negative Bias in Word Embeddings Predicts Racial Animus -- but only via Name Frequency. Austin Van Loon, Salvatore Giorgi, Robb Willer, J. Eichstaedt. 20 Jan 2022.
• A Survey on Gender Bias in Natural Language Processing. Karolina Stańczak, Isabelle Augenstein. 28 Dec 2021.
• Residual2Vec: Debiasing graph embedding with random graphs. Sadamori Kojaku, Jisung Yoon, I. Constantino, Yong-Yeol Ahn. 14 Oct 2021.
• Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models. Robert Wolfe, Aylin Caliskan. 01 Oct 2021.
• Assessing the Reliability of Word Embedding Gender Bias Measures. Yupei Du, Qixiang Fang, D. Nguyen. 10 Sep 2021.
• Theoretical foundations and limits of word embeddings: what types of meaning can they capture? Alina Arseniev-Koehler. 22 Jul 2021.
• RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models. Soumya Barikeri, Anne Lauscher, Ivan Vulić, Goran Glavas. 07 Jun 2021.
• On the Interpretability and Significance of Bias Metrics in Texts: a PMI-based Approach. Francisco Valentini, Germán Rosati, Damián E. Blasi, D. Slezak, Edgar Altszyler. 13 Apr 2021.
• Dictionary-based Debiasing of Pre-trained Word Embeddings. Masahiro Kaneko, Danushka Bollegala. 23 Jan 2021.
• Argument from Old Man's View: Assessing Social Bias in Argumentation. Maximilian Spliethover, Henning Wachsmuth. 24 Nov 2020.
• Utility is in the Eye of the User: A Critique of NLP Leaderboards. Kawin Ethayarajh, Dan Jurafsky. 29 Sep 2020.
• Cultural Cartography with Word Embeddings. Dustin S. Stoltz, Marshall A. Taylor. 09 Jul 2020.
• MDR Cluster-Debias: A Nonlinear Word Embedding Debiasing Pipeline. Yuhao Du, K. Joseph. 20 Jun 2020.
• ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries. Autumn Toney, Aylin Caliskan. 06 Jun 2020.
• On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs. Adina Williams, Ryan Cotterell, Lawrence Wolf-Sonkin, Damián E. Blasi, Hanna M. Wallach. 03 May 2020.
• Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation. Tianlu Wang, Xi Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong. 03 May 2020.
• Multi-Dimensional Gender Bias Classification. Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams. 01 May 2020.
• Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg. 16 Apr 2020.
• Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat. Alina Arseniev-Koehler, J. Foster. 24 Mar 2020.
• Automatically Neutralizing Subjective Bias in Text. Reid Pryzant, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, Diyi Yang. 21 Nov 2019.