ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Learning Gender-Neutral Word Embeddings
arXiv:1809.01496

29 August 2018
Jieyu Zhao
Yichao Zhou
Zeyu Li
Wei Wang
Kai-Wei Chang
    FaML

Papers citing "Learning Gender-Neutral Word Embeddings"

50 / 75 papers shown
Personalisation or Prejudice? Addressing Geographic Bias in Hate Speech Detection using Debias Tuning in Large Language Models
Paloma Piot
Patricia Martín-Rodilla
Javier Parapar
50
0
0
04 May 2025
Gender Encoding Patterns in Pretrained Language Model Representations
Mahdi Zakizadeh
Mohammad Taher Pilehvar
56
0
0
09 Mar 2025
Social Science Is Necessary for Operationalizing Socially Responsible Foundation Models
Adam Davies
Elisa Nguyen
Michael Simeone
Erik Johnston
Martin Gubri
95
0
0
20 Dec 2024
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness
Guangliang Liu
Milad Afshari
Xitong Zhang
Zhiyu Xue
Avrajit Ghosh
Bidhan Bashyal
Rongrong Wang
K. Johnson
32
0
0
06 Jun 2024
MBIAS: Mitigating Bias in Large Language Models While Retaining Context
Shaina Raza
Ananya Raval
Veronica Chatrath
56
6
0
18 May 2024
Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
Damin Zhang
Yi Zhang
Geetanjali Bihani
Julia Taylor Rayz
56
2
0
06 May 2024
Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting
Masahiro Kaneko
Danushka Bollegala
Naoaki Okazaki
Timothy Baldwin
LRM
37
27
0
28 Jan 2024
Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models
Somayeh Ghanbarzadeh
Yan-ping Huang
Hamid Palangi
R. C. Moreno
Hamed Khanpour
44
12
0
20 Jul 2023
Learning to Generate Equitable Text in Dialogue from Biased Training Data
Anthony Sicilia
Malihe Alikhani
62
15
0
10 Jul 2023
Out-of-Distribution Generalization in Text Classification: Past, Present, and Future
Linyi Yang
Yangqiu Song
Xuan Ren
Chenyang Lyu
Yidong Wang
Lingqiao Liu
Jindong Wang
Jennifer Foster
Yue Zhang
OOD
42
2
0
23 May 2023
A Trip Towards Fairness: Bias and De-Biasing in Large Language Models
Leonardo Ranaldi
Elena Sofia Ruzzetti
Davide Venditti
Dario Onorati
Fabio Massimo Zanzotto
45
35
0
23 May 2023
FineDeb: A Debiasing Framework for Language Models
Akash Saravanan
Dhruv Mullick
Habibur Rahman
Nidhi Hegde
FedML
AI4CE
26
4
0
05 Feb 2023
CORGI-PM: A Chinese Corpus For Gender Bias Probing and Mitigation
Ge Zhang
Yizhi Li
Yaoyao Wu
Linyuan Zhang
Chenghua Lin
Jiayi Geng
Shi Wang
Jie Fu
38
10
0
01 Jan 2023
Trustworthy Social Bias Measurement
Rishi Bommasani
Percy Liang
34
10
0
20 Dec 2022
Foundation models in brief: A historical, socio-technical focus
Johannes Schneider
VLM
34
9
0
17 Dec 2022
Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection
P. Haghighatkhah
Antske Fokkens
Pia Sommerauer
Bettina Speckmann
Kevin Verbeek
32
10
0
08 Dec 2022
A Survey on Preserving Fairness Guarantees in Changing Environments
Ainhize Barrainkua
Paula Gordaliza
Jose A. Lozano
Novi Quadrianto
FaML
31
3
0
14 Nov 2022
The Shared Task on Gender Rewriting
Bashar Alhafni
Nizar Habash
Houda Bouamor
Ossama Obeid
Sultan Alrowili
...
Mohamed Gabr
Abderrahmane Issam
Abdelrahim Qaddoumi
K. Vijay-Shanker
Mahmoud Zyate
34
1
0
22 Oct 2022
The User-Aware Arabic Gender Rewriter
Bashar Alhafni
Ossama Obeid
Nizar Habash
31
2
0
14 Oct 2022
Controlling Bias Exposure for Fair Interpretable Predictions
Zexue He
Yu Wang
Julian McAuley
Bodhisattwa Prasad Majumder
27
19
0
14 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini
M. Awad
39
82
0
13 Oct 2022
Social-Group-Agnostic Word Embedding Debiasing via the Stereotype Content Model
Ali Omrani
Brendan Kennedy
M. Atari
Morteza Dehghani
29
1
0
11 Oct 2022
Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks
Masahiro Kaneko
Danushka Bollegala
Naoaki Okazaki
30
41
0
06 Oct 2022
Detecting Political Biases of Named Entities and Hashtags on Twitter
Zhiping Xiao
Jeffrey Zhu
Yining Wang
Pei Zhou
Wen Hong Lam
M. A. Porter
Yizhou Sun
41
18
0
16 Sep 2022
"It's About Respect, Not Technology": Insights from a Cross-Stakeholder Workshop on Gender-Fair Language and Language Technology (original title: "Es geht um Respekt, nicht um Technologie": Erkenntnisse aus einem Interessensgruppen-übergreifenden Workshop zu genderfairer Sprache und Sprachtechnologie)
Sabrina Burtscher
Katta Spiel
Lukas Daniel Klausner
Manuel Lardelli
Dagmar Gromann
21
7
0
06 Sep 2022
Debiasing Word Embeddings with Nonlinear Geometry
Lu Cheng
Nayoung Kim
Huan Liu
24
5
0
29 Aug 2022
BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage
Kurt Shuster
Jing Xu
M. Komeili
Da Ju
Eric Michael Smith
...
Naman Goyal
Arthur Szlam
Y-Lan Boureau
Melanie Kambadur
Jason Weston
LM&Ro
KELM
37
235
0
05 Aug 2022
The Birth of Bias: A case study on the evolution of gender bias in an English language model
Oskar van der Wal
Jaap Jumelet
K. Schulz
Willem H. Zuidema
37
16
0
21 Jul 2022
You Are What You Write: Preserving Privacy in the Era of Large Language Models
Richard Plant
V. Giuffrida
Dimitra Gkatzia
PILM
43
19
0
20 Apr 2022
Sense Embeddings are also Biased--Evaluating Social Biases in Static and Contextualised Sense Embeddings
Yi Zhou
Masahiro Kaneko
Danushka Bollegala
39
23
0
14 Mar 2022
Learning Fair Representations via Rate-Distortion Maximization
Somnath Basu Roy Chowdhury
Snigdha Chaturvedi
FaML
10
14
0
31 Jan 2022
A Survey on Gender Bias in Natural Language Processing
Karolina Stańczak
Isabelle Augenstein
32
111
0
28 Dec 2021
Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun
Diyi Yang
Xiaoya Li
Tianwei Zhang
Yuxian Meng
Han Qiu
Guoyin Wang
Eduard H. Hovy
Jiwei Li
24
45
0
20 Oct 2021
Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens
Saad Hassan
Matt Huenerfauth
Cecilia Ovesdotter Alm
51
38
0
01 Oct 2021
Second Order WinoBias (SoWinoBias) Test Set for Latent Gender Bias Detection in Coreference Resolution
Hillary Dawkins
14
0
0
28 Sep 2021
Assessing the Reliability of Word Embedding Gender Bias Measures
Yupei Du
Qixiang Fang
D. Nguyen
52
21
0
10 Sep 2021
Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks
Xudong Shen
Yongkang Wong
Mohan S. Kankanhalli
FaML
39
20
0
01 Sep 2021
Evaluating Gender Bias in Natural Language Inference
Shanya Sharma
Manan Dey
Koustuv Sinha
28
41
0
12 May 2021
VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations
Archit Rathore
Sunipa Dev
J. M. Phillips
Vivek Srikumar
Yan Zheng
Chin-Chia Michael Yeh
Junpeng Wang
Wei Zhang
Bei Wang
43
10
0
06 Apr 2021
COVID-19 sentiment analysis via deep learning during the rise of novel cases
Rohitash Chandra
Aswin Krishna
35
97
0
05 Apr 2021
Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
Hannah Rose Kirk
Yennie Jun
Haider Iqbal
Elias Benussi
Filippo Volpin
F. Dreyer
Aleksandar Shtedritski
Yuki M. Asano
22
181
0
08 Feb 2021
Dictionary-based Debiasing of Pre-trained Word Embeddings
Masahiro Kaneko
Danushka Bollegala
FaML
38
38
0
23 Jan 2021
Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko
Danushka Bollegala
218
137
0
23 Jan 2021
Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal Clinical NLP
John Chen
Ian Berlot-Attwell
Safwan Hossain
Xindi Wang
Frank Rudzicz
FaML
37
7
0
19 Nov 2020
Astraea: Grammar-based Fairness Testing
E. Soremekun
Sakshi Udeshi
Sudipta Chattopadhyay
26
27
0
06 Oct 2020
Investigating Gender Bias in BERT
Rishabh Bhardwaj
Navonil Majumder
Soujanya Poria
33
106
0
10 Sep 2020
MDR Cluster-Debias: A Nonlinear Word Embedding Debiasing Pipeline
Yuhao Du
K. Joseph
19
3
0
20 Jun 2020
Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning
Procheta Sen
Debasis Ganguly
27
18
0
14 May 2020
Deep Learning for Political Science
Kakia Chatsiou
Slava Jankin
AI4CE
39
12
0
13 May 2020
On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
Adina Williams
Ryan Cotterell
Lawrence Wolf-Sonkin
Damián E. Blasi
Hanna M. Wallach
34
19
0
03 May 2020