Gender Bias in Neural Natural Language Processing

31 July 2018
Kaiji Lu
Piotr (Peter) Mardziel
Fangjing Wu
Preetam Amancharla
Anupam Datta
arXiv: 1807.11714

Papers citing "Gender Bias in Neural Natural Language Processing"

Showing 50 of 76 citing papers.
A Comparative Analysis of Ethical and Safety Gaps in LLMs using Relative Danger Coefficient
Yehor Tereshchenko
Mika Hämäläinen
ELM
51
1
0
06 May 2025
Bias Analysis and Mitigation through Protected Attribute Detection and Regard Classification
Takuma Udagawa
Yang Zhao
H. Kanayama
Bishwaranjan Bhattacharjee
33
0
0
19 Apr 2025
Towards Large Language Models that Benefit for All: Benchmarking Group Fairness in Reward Models
Kefan Song
Jin Yao
Runnan Jiang
Rohan Chandra
Shangtong Zhang
ALM
46
0
0
10 Mar 2025
Assumed Identities: Quantifying Gender Bias in Machine Translation of Gender-Ambiguous Occupational Terms
Orfeas Menis Mastromichalakis
Giorgos Filandrianos
Maria Symeonaki
Giorgos Stamou
67
0
0
06 Mar 2025
Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management
Xiahua Wei
Naveen Kumar
Han Zhang
70
5
0
22 Jan 2025
No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users
Mengxuan Hu
Hongyi Wu
Zihan Guan
Ronghang Zhu
Dongliang Guo
Daiqing Qi
Sheng Li
SILM
43
3
0
10 Oct 2024
Post-hoc Study of Climate Microtargeting on Social Media Ads with LLMs: Thematic Insights and Fairness Evaluation
Tunazzina Islam
Dan Goldwasser
41
1
0
07 Oct 2024
Collapsed Language Models Promote Fairness
Jingxuan Xu
Wuyang Chen
Linyi Li
Yao Zhao
Yunchao Wei
48
0
0
06 Oct 2024
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness
Guangliang Liu
Milad Afshari
Xitong Zhang
Zhiyu Xue
Avrajit Ghosh
Bidhan Bashyal
Rongrong Wang
K. Johnson
32
0
0
06 Jun 2024
Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
Damin Zhang
Yi Zhang
Geetanjali Bihani
Julia Taylor Rayz
56
2
0
06 May 2024
Detecting Bias in Large Language Models: Fine-tuned KcBERT
J. K. Lee
T. M. Chung
34
0
0
16 Mar 2024
Measuring Bias in a Ranked List using Term-based Representations
Amin Abolghasemi
Leif Azzopardi
Arian Askari
Maarten de Rijke
Suzan Verberne
42
6
0
09 Mar 2024
Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model
Abhijith Chintam
Rahel Beloch
Willem H. Zuidema
Michael Hanna
Oskar van der Wal
28
16
0
19 Oct 2023
Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts
Christina Chance
Da Yin
Dakuo Wang
Kai-Wei Chang
34
0
0
16 Oct 2023
A Survey on Fairness in Large Language Models
Yingji Li
Mengnan Du
Rui Song
Xin Wang
Ying Wang
ALM
54
60
0
20 Aug 2023
Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases
Yingji Li
Mengnan Du
Xin Wang
Ying Wang
53
27
0
04 Jul 2023
Gender Bias in BERT -- Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task
Sophie F. Jentzsch
Cigdem Turan
31
31
0
27 Jun 2023
Long-form analogies generated by chatGPT lack human-like psycholinguistic properties
S. M. Seals
V. Shalin
24
11
0
07 Jun 2023
Out-of-Distribution Generalization in Text Classification: Past, Present, and Future
Linyi Yang
Yangqiu Song
Xuan Ren
Chenyang Lyu
Yidong Wang
Lingqiao Liu
Jindong Wang
Jennifer Foster
Yue Zhang
OOD
42
2
0
23 May 2023
Should We Attend More or Less? Modulating Attention for Fairness
A. Zayed
Gonçalo Mordido
Samira Shabanian
Sarath Chandar
40
10
0
22 May 2023
ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages
Sourojit Ghosh
Aylin Caliskan
41
69
0
17 May 2023
Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning
Hongyin Luo
James R. Glass
NAI
29
7
0
10 Mar 2023
Synthcity: facilitating innovative use cases of synthetic data in different data modalities
Zhaozhi Qian
B. Cebere
M. Schaar
SyDa
38
57
0
18 Jan 2023
CORGI-PM: A Chinese Corpus For Gender Bias Probing and Mitigation
Ge Zhang
Yizhi Li
Yaoyao Wu
Linyuan Zhang
Chenghua Lin
Jiayi Geng
Shi Wang
Jie Fu
34
10
0
01 Jan 2023
Foundation models in brief: A historical, socio-technical focus
Johannes Schneider
VLM
29
9
0
17 Dec 2022
Assessing the Impact of Sequence Length Learning on Classification Tasks for Transformer Encoder Models
Jean-Thomas Baillargeon
Luc Lamontagne
35
1
0
16 Dec 2022
Deep Causal Learning: Representation, Discovery and Inference
Zizhen Deng
Xiaolong Zheng
Hu Tian
D. Zeng
CML
BDL
41
11
0
07 Nov 2022
The Shared Task on Gender Rewriting
Bashar Alhafni
Nizar Habash
Houda Bouamor
Ossama Obeid
Sultan Alrowili
...
Mohamed Gabr
Abderrahmane Issam
Abdelrahim Qaddoumi
K. Vijay-Shanker
Mahmoud Zyate
34
1
0
22 Oct 2022
AugCSE: Contrastive Sentence Embedding with Diverse Augmentations
Zilu Tang
Muhammed Yusuf Kocyigit
Derry Wijaya
37
9
0
20 Oct 2022
The User-Aware Arabic Gender Rewriter
Bashar Alhafni
Ossama Obeid
Nizar Habash
29
2
0
14 Oct 2022
Controlling Bias Exposure for Fair Interpretable Predictions
Zexue He
Yu Wang
Julian McAuley
Bodhisattwa Prasad Majumder
27
19
0
14 Oct 2022
Unified Detoxifying and Debiasing in Language Generation via Inference-time Adaptive Optimization
Zonghan Yang
Xiaoyuan Yi
Peng Li
Yang Liu
Xing Xie
38
33
0
10 Oct 2022
FAST: Improving Controllability for Text Generation with Feedback Aware Self-Training
Junyi Chai
Reid Pryzant
Victor Ye Dong
Konstantin Golobokov
Chenguang Zhu
Yi Liu
37
5
0
06 Oct 2022
The Birth of Bias: A case study on the evolution of gender bias in an English language model
Oskar van der Wal
Jaap Jumelet
K. Schulz
Willem H. Zuidema
32
16
0
21 Jul 2022
FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle
Bettina Berendt
26
8
0
10 Jul 2022
MVP: Multi-task Supervised Pre-training for Natural Language Generation
Tianyi Tang
Junyi Li
Wayne Xin Zhao
Ji-Rong Wen
51
24
0
24 Jun 2022
What Changed? Investigating Debiasing Methods using Causal Mediation Analysis
Su-Ha Jeoung
Jana Diesner
CML
27
7
0
01 Jun 2022
Using Natural Sentences for Understanding Biases in Language Models
Sarah Alnegheimish
Alicia Guo
Yi Sun
27
21
0
12 May 2022
Synthetic Data -- what, why and how?
James Jordon
Lukasz Szpruch
F. Houssiau
M. Bottarelli
Giovanni Cherubin
Carsten Maple
Samuel N. Cohen
Adrian Weller
51
109
0
06 May 2022
Informativeness and Invariance: Two Perspectives on Spurious Correlations in Natural Language
Jacob Eisenstein
CML
35
25
0
09 Apr 2022
PanGu-Bot: Efficient Generative Dialogue Pre-training from Pre-trained Language Model
Fei Mi
Yitong Li
Yulong Zeng
Jingyan Zhou
Yasheng Wang
Chuanfei Xu
Lifeng Shang
Xin Jiang
Shiqi Zhao
Qun Liu
ALM
45
18
0
31 Mar 2022
Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal
Umang Gupta
Jwala Dhamala
Varun Kumar
Apurv Verma
Yada Pruksachatkun
Satyapriya Krishna
Rahul Gupta
Kai-Wei Chang
Greg Ver Steeg
Aram Galstyan
21
49
0
23 Mar 2022
Feminist Perspective on Robot Learning Processes
Juana Valeria Hurtado
Valentina Mejia
FaML
22
3
0
26 Jan 2022
Making a (Counterfactual) Difference One Rationale at a Time
Michael J. Plyler
Michal Green
Min Chi
26
11
0
13 Jan 2022
A Survey on Gender Bias in Natural Language Processing
Karolina Stańczak
Isabelle Augenstein
30
111
0
28 Dec 2021
Sparse Interventions in Language Models with Differentiable Masking
Nicola De Cao
Leon Schmid
Dieuwke Hupkes
Ivan Titov
40
27
0
13 Dec 2021
NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
Kaustubh D. Dhole
Varun Gangal
Sebastian Gehrmann
Aadesh Gupta
Zhenhao Li
...
Tianbao Xie
Usama Yaseen
Michael A. Yee
Jing Zhang
Yue Zhang
174
86
0
06 Dec 2021
Reason first, then respond: Modular Generation for Knowledge-infused Dialogue
Leonard Adolphs
Kurt Shuster
Jack Urbanek
Arthur Szlam
Jason Weston
KELM
LRM
212
41
0
09 Nov 2021
DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks
A. Saha
Trent Kyono
J. Linmans
M. Schaar
CML
37
106
0
25 Oct 2021
Detecting Gender Bias in Transformer-based Models: A Case Study on BERT
Bingbing Li
Hongwu Peng
Rajat Sainju
Junhuan Yang
Lei Yang
Yueying Liang
Weiwen Jiang
Binghui Wang
Hang Liu
Caiwen Ding
32
12
0
15 Oct 2021