ResearchTrend.AI
arXiv: 2005.00614
Multi-Dimensional Gender Bias Classification

1 May 2020
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
Tags: FaML

Papers citing "Multi-Dimensional Gender Bias Classification" (22 of 22 papers shown)

Assumed Identities: Quantifying Gender Bias in Machine Translation of Gender-Ambiguous Occupational Terms
Orfeas Menis Mastromichalakis, Giorgos Filandrianos, Maria Symeonaki, Giorgos Stamou
06 Mar 2025

The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI
Christopher Burger, Charles Walter, Thai Le
Tags: AAML
20 Jan 2025

Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
Flor Miriam Plaza del Arco, Amanda Cercas Curry, Alba Curry, Gavin Abercrombie, Dirk Hovy
05 Mar 2024

Zero-Shot Robustification of Zero-Shot Models
Dyah Adila, Changho Shin, Lin Cai, Frederic Sala
08 Sep 2023

Uncovering and Quantifying Social Biases in Code Generation
Yong-Jin Liu, Xiaokang Chen, Yan Gao, Zhe Su, Fengji Zhang, Daoguang Zan, Jian-Guang Lou, Pin-Yu Chen, Tsung-Yi Ho
24 May 2023

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett
22 May 2023

Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts
Katie Seaborn, Shruti Chandra, Thibault Fabre
22 Apr 2023

Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models
Silke Husse, Andreas Spitz
15 Nov 2022

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams
18 May 2022

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
14 Dec 2021

CO-STAR: Conceptualisation of Stereotypes for Analysis and Reasoning
Teyun Kwon, Anandha Gopalan
01 Dec 2021

Investigating Cross-Linguistic Gender Bias in Hindi-English Across Domains
S N Khosla
22 Nov 2021

A Word on Machine Ethics: A Response to Jiang et al. (2021)
Zeerak Talat, Hagen Blix, Josef Valvoda, M. I. Ganesh, Ryan Cotterell, Adina Williams
Tags: SyDa, FaML
07 Nov 2021

An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
Nicholas Meade, Elinor Poole-Dayan, Siva Reddy
16 Oct 2021

Automatically Exposing Problems with Neural Dialog Models
Dian Yu, Kenji Sagae
14 Sep 2021

Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling
Emily Dinan, Gavin Abercrombie, A. S. Bergman, Shannon L. Spruit, Dirk Hovy, Y-Lan Boureau, Verena Rieser
07 Jul 2021

RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
Soumya Barikeri, Anne Lauscher, Ivan Vulić, Goran Glavaš
07 Jun 2021

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
30 Sep 2020

Mitigating Gender Bias for Neural Dialogue Generation with Adversarial Learning
Haochen Liu, Wentao Wang, Yiqi Wang, Hui Liu, Zitao Liu, Jiliang Tang
28 Sep 2020

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith
24 Sep 2020

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang, Xi Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong
Tags: CVBM
03 May 2020

Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
10 Nov 2019