Gender Bias in Contextualized Word Embeddings

5 April 2019
Jieyu Zhao
Tianlu Wang
Mark Yatskar
Ryan Cotterell
Vicente Ordonez
Kai-Wei Chang
    FaML
arXiv:1904.03310 · PDF · HTML

Papers citing "Gender Bias in Contextualized Word Embeddings"

50 / 241 papers shown
Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models
Laura Cabello
Emanuele Bugliarello
Stephanie Brandl
Desmond Elliott
23
7
0
26 Oct 2023
Investigating Bias in Multilingual Language Models: Cross-Lingual Transfer of Debiasing Techniques
Manon Reusens
Philipp Borchert
Margot Mieskes
Jochen De Weerdt
Bart Baesens
37
8
0
16 Oct 2023
Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts
Christina Chance
Da Yin
Dakuo Wang
Kai-Wei Chang
34
0
0
16 Oct 2023
Survey of Social Bias in Vision-Language Models
Nayeon Lee
Yejin Bang
Holy Lovenia
Samuel Cahyawijaya
Wenliang Dai
Pascale Fung
VLM
47
16
0
24 Sep 2023
The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated
Masahiro Kaneko
Danushka Bollegala
Naoaki Okazaki
61
5
0
16 Sep 2023
In-Contextual Gender Bias Suppression for Large Language Models
Daisuke Oba
Masahiro Kaneko
Danushka Bollegala
31
8
0
13 Sep 2023
Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos
Ryan A. Rossi
Joe Barrow
Md Mehrab Tanjim
Sungchul Kim
Franck Dernoncourt
Tong Yu
Ruiyi Zhang
Nesreen Ahmed
AILaw
40
498
0
02 Sep 2023
Gender bias and stereotypes in Large Language Models
Hadas Kotek
Rikker Dockum
David Q. Sun
44
207
0
28 Aug 2023
CMD: a framework for Context-aware Model self-Detoxification
Zecheng Tang
Keyan Zhou
Juntao Li
Yuyang Ding
Pinzheng Wang
Bowen Yan
Min Zhang
MU
23
5
0
16 Aug 2023
Understanding Multi-Turn Toxic Behaviors in Open-Domain Chatbots
Bocheng Chen
Guangjing Wang
Hanqing Guo
Yuanda Wang
Qiben Yan
41
15
0
14 Jul 2023
Evaluating Biased Attitude Associations of Language Models in an Intersectional Context
Shiva Omrani Sabbaghi
Robert Wolfe
Aylin Caliskan
26
22
0
07 Jul 2023
Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases
Yingji Li
Mengnan Du
Xin Wang
Ying Wang
53
27
0
04 Jul 2023
Gender Bias in Transformer Models: A comprehensive survey
Praneeth Nemani
Yericherla Deepak Joel
Pallavi Vijay
Farhana Ferdousi Liza
24
3
0
18 Jun 2023
Sociodemographic Bias in Language Models: A Survey and Forward Path
Vipul Gupta
Pranav Narayanan Venkit
Shomir Wilson
R. Passonneau
44
21
0
13 Jun 2023
Measuring Sentiment Bias in Machine Translation
Kai Hartung
Aaricia Herygers
Shubham Kurlekar
Khabbab Zakaria
Taylan Volkan
Sören Gröttrup
Munir Georges
AI4CE
28
5
0
12 Jun 2023
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Irene Solaiman
Zeerak Talat
William Agnew
Lama Ahmad
Dylan K. Baker
...
Marie-Therese Png
Shubham Singh
A. Strait
Lukas Struppek
Arjun Subramonian
ELM
EGVM
41
104
0
09 Jun 2023
Are fairness metric scores enough to assess discrimination biases in machine learning?
Fanny Jourdan
Laurent Risser
Jean-Michel Loubes
Nicholas M. Asher
FaML
16
5
0
08 Jun 2023
Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions
Himanshu Thakur
Atishay Jain
Praneetha Vaddamanu
Paul Pu Liang
Louis-Philippe Morency
36
31
0
07 Jun 2023
An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models
Zhongbin Xie
Thomas Lukasiewicz
28
12
0
06 Jun 2023
An Invariant Learning Characterization of Controlled Text Generation
Carolina Zheng
Claudia Shi
Keyon Vafa
Amir Feder
David M. Blei
OOD
38
8
0
31 May 2023
Nichelle and Nancy: The Influence of Demographic Attributes and Tokenization Length on First Name Biases
Haozhe An
Rachel Rudinger
24
9
0
26 May 2023
Out-of-Distribution Generalization in Text Classification: Past, Present, and Future
Linyi Yang
Yangqiu Song
Xuan Ren
Chenyang Lyu
Yidong Wang
Lingqiao Liu
Jindong Wang
Jennifer Foster
Yue Zhang
OOD
37
2
0
23 May 2023
A Trip Towards Fairness: Bias and De-Biasing in Large Language Models
Leonardo Ranaldi
Elena Sofia Ruzzetti
Davide Venditti
Dario Onorati
Fabio Massimo Zanzotto
40
35
0
23 May 2023
Toxicity in ChatGPT: Analyzing Persona-assigned Language Models
Ameet Deshpande
Vishvak Murahari
Tanmay Rajpurohit
Ashwin Kalyan
Karthik R. Narasimhan
LM&MA
LLMAG
29
338
0
11 Apr 2023
Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense
Andrei Kucharavy
Z. Schillaci
Loic Maréchal
Maxime Wursch
Ljiljana Dolamic
Remi Sabonnadiere
Dimitri Percia David
Alain Mermoud
Vincent Lenders
ELM
AI4CE
35
31
0
21 Mar 2023
ChatGPT and a New Academic Reality: Artificial Intelligence-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing
Brady Lund
Ting Wang
Nishith Reddy Mannuru
Bing Nie
S. Shimray
Ziang Wang
AI4CE
15
498
0
21 Mar 2023
Toward Fairness in Text Generation via Mutual Information Minimization based on Importance Sampling
Rui Wang
Pengyu Cheng
Ricardo Henao
20
12
0
25 Feb 2023
Parameter-efficient Modularised Bias Mitigation via AdapterFusion
Deepak Kumar
Oleg Lesota
George Zerveas
Daniel Cohen
Carsten Eickhoff
Markus Schedl
Navid Rekabsaz
MoMe
KELM
28
25
0
13 Feb 2023
Debiasing Vision-Language Models via Biased Prompts
Ching-Yao Chuang
Varun Jampani
Yuanzhen Li
Antonio Torralba
Stefanie Jegelka
VLM
30
97
0
31 Jan 2023
Transformer-Patcher: One Mistake worth One Neuron
Zeyu Huang
Songlin Yang
Xiaofeng Zhang
Jie Zhou
Wenge Rong
Zhang Xiong
KELM
42
160
0
24 Jan 2023
An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models
Saghar Hosseini
Hamid Palangi
Ahmed Hassan Awadallah
39
22
0
22 Jan 2023
Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models
Anoop Kadan
P Deepak
Sahely Bhadra
Manjary P. Gangan
Lajish V. L.
19
2
0
21 Jan 2023
A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models
Xingmeng Zhao
A. Niazi
Anthony Rios
31
2
0
24 Dec 2022
The effects of gender bias in word embeddings on depression prediction
Gizem Sogancioglu
Heysem Kaya
26
3
0
15 Dec 2022
Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology
Valentin Hofmann
J. Pierrehumbert
Hinrich Schütze
36
0
0
14 Dec 2022
The Grind for Good Data: Understanding ML Practitioners' Struggles and Aspirations in Making Good Data
Inha Cha
Juhyun Oh
Cheul Young Park
Jiyoon Han
Hwalsuk Lee
29
2
0
28 Nov 2022
Undesirable Biases in NLP: Addressing Challenges of Measurement
Oskar van der Wal
Dominik Bachmann
Alina Leidinger
L. Maanen
Willem H. Zuidema
K. Schulz
30
6
0
24 Nov 2022
Conceptor-Aided Debiasing of Large Language Models
Yifei Li
Lyle Ungar
João Sedoc
14
4
0
20 Nov 2022
Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models
Silke Husse
Andreas Spitz
28
6
0
15 Nov 2022
HERB: Measuring Hierarchical Regional Bias in Pre-trained Language Models
Yizhi Li
Ge Zhang
Bohao Yang
Chenghua Lin
Shi Wang
Anton Ragni
Jie Fu
30
9
0
05 Nov 2022
MABEL: Attenuating Gender Bias using Textual Entailment Data
Jacqueline He
Mengzhou Xia
C. Fellbaum
Danqi Chen
32
32
0
26 Oct 2022
Choose Your Lenses: Flaws in Gender Bias Evaluation
Hadas Orgad
Yonatan Belinkov
27
35
0
20 Oct 2022
Log-linear Guardedness and its Implications
Shauli Ravfogel
Yoav Goldberg
Ryan Cotterell
28
2
0
18 Oct 2022
Social Biases in Automatic Evaluation Metrics for NLG
Mingqi Gao
Xiaojun Wan
30
3
0
17 Oct 2022
BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
Tianxiang Sun
Junliang He
Xipeng Qiu
Xuanjing Huang
24
44
0
14 Oct 2022
SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models
Haozhe An
Zongxia Li
Jieyu Zhao
Rachel Rudinger
30
25
0
13 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini
M. Awad
29
82
0
13 Oct 2022
Toxicity in Multilingual Machine Translation at Scale
Marta R. Costa-jussà
Eric Michael Smith
C. Ropers
Daniel Licht
Jean Maillard
Javier Ferrando
Carlos Escolano
30
25
0
06 Oct 2022
What Do Children and Parents Want and Perceive in Conversational Agents? Towards Transparent, Trustworthy, Democratized Agents
Jessica Van Brummelen
M. Kelleher
Mi Tian
Nghi Hoang Nguyen
24
10
0
16 Sep 2022
Efficient Gender Debiasing of Pre-trained Indic Language Models
Neeraja Kirtane
V. Manushree
Aditya Kane
19
3
0
08 Sep 2022