ResearchTrend.AI

arXiv:1904.03310 · Cited By
Gender Bias in Contextualized Word Embeddings

5 April 2019
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang [FaML]

Papers citing "Gender Bias in Contextualized Word Embeddings" (50 of 241 shown)
Debiasing Word Embeddings with Nonlinear Geometry
Lu Cheng, Nayoung Kim, Huan Liu (29 Aug 2022)

The Birth of Bias: A case study on the evolution of gender bias in an English language model
Oskar van der Wal, Jaap Jumelet, K. Schulz, Willem H. Zuidema (21 Jul 2022)

A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America
Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía González, Mariela Rajngewerc, ..., Guido Ivetta, Alexia Halvorsen, Amanda Rojo, M. Bordone, Beatriz Busaniche (14 Jul 2022)

Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning
Przemyslaw K. Joniak, Akiko Aizawa (06 Jul 2022)

Counterfactually Measuring and Eliminating Social Bias in Vision-Language Pre-training Models
Yi Zhang, Junyan Wang, Jitao Sang (03 Jul 2022)

Towards Lexical Gender Inference: A Scalable Methodology using Online Databases
Marion Bartl, Susan Leavy (28 Jun 2022)

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser (08 Jun 2022)

Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics
Aylin Caliskan, Pimparkar Parth Ajay, Tessa E. S. Charlesworth, Robert Wolfe, M. Banaji [CVBM, FaML] (07 Jun 2022)

Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks
Lukas Hauzenberger, Shahed Masoudian, Deepak Kumar, Markus Schedl, Navid Rekabsaz (30 May 2022)

DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation
Jingnong Qu, Liunian Harold Li, Jieyu Zhao, Sunipa Dev, Kai-Wei Chang (25 May 2022)

Toward Understanding Bias Correlations for Mitigation in NLP
Lu Cheng, Suyu Ge, Huan Liu (24 May 2022)

On Measuring Social Biases in Prompt-Based Multi-Task Learning
Afra Feyza Akyürek, Sejin Paik, Muhammed Yusuf Kocyigit, S. Akbiyik, Şerife Leman Runyun, Derry Wijaya [ALM] (23 May 2022)

Conditional Supervised Contrastive Learning for Fair Text Classification
Jianfeng Chi, Will Shand, Yaodong Yu, Kai-Wei Chang, Han Zhao, Yuan Tian [FaML] (23 May 2022)

How sensitive are translation systems to extra contexts? Mitigating gender bias in Neural Machine Translation models through relevant contexts
Shanya Sharma, Manan Dey, Koustuv Sinha (22 May 2022)

Gender Bias in Meta-Embeddings
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki (19 May 2022)

Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation
Samhita Honnavalli, Aesha Parekh, Li-hsueh Ou, Sophie Groenwold, Sharon Levy, Vicente Ordonez, William Yang Wang (19 May 2022)

Disentangling Active and Passive Cosponsorship in the U.S. Congress
Giuseppe Russo, Christoph Gote, L. Brandenberger, Sophia Schlosser, F. Schweitzer [LLMSV, AI4CE] (19 May 2022)

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams (18 May 2022)

Theories of "Gender" in NLP Bias Research
Hannah Devinney, Jenny Björklund, H. Björklund [AI4CE] (05 May 2022)

Detoxifying Language Models with a Toxic Corpus
Yoon A Park, Frank Rudzicz (30 Apr 2022)

Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
Anoop Kadan, Manjary P. Gangan, Deepak P, Lajish V. L. [AI4CE] (21 Apr 2022)

You Are What You Write: Preserving Privacy in the Era of Large Language Models
Richard Plant, V. Giuffrida, Dimitra Gkatzia [PILM] (20 Apr 2022)

Analyzing Gender Representation in Multilingual Models
Hila Gonen, Shauli Ravfogel, Yoav Goldberg (20 Apr 2022)

Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts
Apoorv Garg, Deval Srivastava, Zhiyang Xu, Lifu Huang (15 Apr 2022)

How Gender Debiasing Affects Internal Model Representations, and Why It Matters
Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov (14 Apr 2022)

Fair and Argumentative Language Modeling for Computational Argumentation
Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto (08 Apr 2022)

Mapping the Multilingual Margins: Intersectional Biases of Sentiment Analysis Systems in English, Spanish, and Arabic
Antonio Camara, Nina Taneja, Tamjeed Azad, Emily Allaway, R. Zemel (07 Apr 2022)

On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations
Yang Trista Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, Aram Galstyan (25 Mar 2022)

A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning
Hugo Elias Berg, S. Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain [VLM] (22 Mar 2022)

On Robust Prefix-Tuning for Text Classification
Zonghan Yang, Yang Liu [VLM] (19 Mar 2022)

Challenges and Strategies in Cross-Cultural NLP
Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, ..., Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, Anders Søgaard (18 Mar 2022)

Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
Masashi Takeshita, Rafal Rzepka, K. Araki (10 Mar 2022)

Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks
Jingyan Zhou, Deng Jiawen, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, Helen Meng (16 Feb 2022)

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models
Wei Ping, Ming-Yu Liu, Chaowei Xiao, P. Xu, M. Patwary, M. Shoeybi, Bo-wen Li, Anima Anandkumar, Bryan Catanzaro (08 Feb 2022)

Counterfactual Multi-Token Fairness in Text Classification
P. Lohia (08 Feb 2022)

LaMDA: Language Models for Dialog Applications
R. Thoppilan, Daniel De Freitas, Jamie Hall, Noam M. Shazeer, Apoorv Kulshreshtha, ..., Blaise Aguera-Arcas, Claire Cui, M. Croak, Ed H. Chi, Quoc Le [ALM] (20 Jan 2022)

Unintended Bias in Language Model-driven Conversational Recommendation
Tianshu Shen, Jiaru Li, Mohamed Reda Bouadjenek, Zheda Mai, Scott Sanner (17 Jan 2022)

Pretty Princess vs. Successful Leader: Gender Roles in Greeting Card Messages
Jiao Sun, Tongshuang Wu, Yue Jiang, Ronil Awalegaonkar, Xi Lin, Diyi Yang (28 Dec 2021)

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt (14 Dec 2021)

Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving
Lei Ding, Dengdeng Yu, Jinhan Xie, Wenxing Guo, Shenggang Hu, Meichen Liu, Linglong Kong, Hongsheng Dai, Yanchun Bao, Bei Jiang [FaML] (09 Dec 2021)

Ethical and social risks of harm from Language Models
Laura Weidinger, John F. J. Mellor, Maribeth Rauh, Conor Griffin, J. Uesato, ..., Lisa Anne Hendricks, William S. Isaac, Sean Legassick, G. Irving, Iason Gabriel [PILM] (08 Dec 2021)

Evaluating Metrics for Bias in Word Embeddings
Sarah Schröder, Alexander Schulz, Philip Kenneweg, Robert Feldhans, Fabian Hinder, Barbara Hammer (15 Nov 2021)

SynthBio: A Case Study in Human-AI Collaborative Curation of Text Datasets
Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, Sebastian Gehrmann [SyDa] (11 Nov 2021)

A Word on Machine Ethics: A Response to Jiang et al. (2021)
Zeerak Talat, Hagen Blix, Josef Valvoda, M. I. Ganesh, Ryan Cotterell, Adina Williams [SyDa, FaML] (07 Nov 2021)

Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias
William Thong, Cees G. M. Snoek (27 Oct 2021)

Fairness in Missing Data Imputation
Yiliang Zhang, Q. Long (22 Oct 2021)

Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting
Zahra Fatemi, Chen Xing, Wenhao Liu, Caiming Xiong [CLL] (11 Oct 2021)

On a Benefit of Mask Language Modeling: Robustness to Simplicity Bias
Ting-Rui Chiang (11 Oct 2021)

Sustainable Modular Debiasing of Language Models
Anne Lauscher, Tobias Lüken, Goran Glavas (08 Sep 2021)

Hi, my name is Martha: Using names to measure and mitigate bias in generative dialogue models
Eric Michael Smith, Adina Williams (07 Sep 2021)