Evaluating Debiasing Techniques for Intersectional Biases
21 September 2021
Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann

Papers citing "Evaluating Debiasing Techniques for Intersectional Biases"

28 papers shown.

1. Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management. Xiahua Wei, Naveen Kumar, Han Zhang. 22 Jan 2025.
2. Diversity Drives Fairness: Ensemble of Higher Order Mutants for Intersectional Fairness of Machine Learning Software. Zhenpeng Chen, Xinyue Li, J. Zhang, Federica Sarro, Yang Liu. 11 Dec 2024.
3. Fairness Definitions in Language Models Explained. Thang Viet Doan, Zhibo Chu, Zichong Wang, Wenbin Zhang. 26 Jul 2024.
4. Addressing Both Statistical and Causal Gender Fairness in NLP Models. Hannah Chen, Yangfeng Ji, David E. Evans. 30 Mar 2024.
5. MIST: Mitigating Intersectional Bias with Disentangled Cross-Attention Editing in Text-to-Image Diffusion Models. Hidir Yesiltepe, Kiymet Akdemir, Pinar Yanardag. 28 Mar 2024.
6. Protected group bias and stereotypes in Large Language Models. Hadas Kotek, David Q. Sun, Zidi Xiu, Margit Bowler, Christopher Klein. 21 Mar 2024.
7. A Note on Bias to Complete. Jia Xu, Mona Diab. 18 Feb 2024.
8. DSAP: Analyzing Bias Through Demographic Comparison of Datasets. Iris Dominguez-Catena, D. Paternain, M. Galar. 22 Dec 2023.
9. Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs. Yuxia Wang, Haonan Li, Xudong Han, Preslav Nakov, Timothy Baldwin. 25 Aug 2023.
10. Sociodemographic Bias in Language Models: A Survey and Forward Path. Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau. 13 Jun 2023.
11. Transferring Fairness using Multi-Task Learning with Limited Demographic Information. Carlos Alejandro Aguirre, Mark Dredze. 22 May 2023.
12. Fair Without Leveling Down: A New Intersectional Fairness Definition. Gaurav Maheshwari, A. Bellet, Pascal Denis, Mikaela Keller. 21 May 2023.
13. A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges. Usman Gohar, Lu Cheng. 11 May 2023.
14. iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or Modelling Perspectives? Nikolas Vitsakis, Amit Parekh, Tanvi Dinkar, Gavin Abercrombie, Ioannis Konstas, Verena Rieser. 10 May 2023.
15. Fairness in Language Models Beyond English: Gaps and Challenges. Krithika Ramesh, Sunayana Sitaram, Monojit Choudhury. 24 Feb 2023.
16. Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP. Xudong Han, Timothy Baldwin, Trevor Cohn. 11 Feb 2023.
17. Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets. Tosin P. Adewumi, Isabella Sodergren, Lama Alkhaled, Sana Sabah Sabry, F. Liwicki, Marcus Liwicki. 28 Jan 2023.
18. Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey. Otávio Parraga, Martin D. Móre, C. M. Oliveira, Nathan Gavenski, L. S. Kupssinskü, Adilson Medronha, L. V. Moura, Gabriel S. Simões, Rodrigo C. Barros. 10 Nov 2022.
19. Bridging Fairness and Environmental Sustainability in Natural Language Processing. Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher. 08 Nov 2022.
20. MABEL: Attenuating Gender Bias using Textual Entailment Data. Jacqueline He, Mengzhou Xia, C. Fellbaum, Danqi Chen. 26 Oct 2022.
21. Systematic Evaluation of Predictive Fairness. Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann. 17 Oct 2022.
22. Controlling Bias Exposure for Fair Interpretable Predictions. Zexue He, Yu-Xiang Wang, Julian McAuley, Bodhisattwa Prasad Majumder. 14 Oct 2022.
23. Optimising Equal Opportunity Fairness in Model Training. Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann. 05 May 2022.
24. Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation. Nitesh Goyal, Ian D Kivlichan, Rachel Rosen, Lucy Vasserman. 01 May 2022.
25. Towards Equal Opportunity Fairness through Adversarial Learning. Xudong Han, Timothy Baldwin, Trevor Cohn. 12 Mar 2022.
26. Learning Fair Representations via Rate-Distortion Maximization. Somnath Basu Roy Chowdhury, Snigdha Chaturvedi. 31 Jan 2022.
27. Fairness-aware Class Imbalanced Learning. Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, Lea Frermann. 21 Sep 2021.
28. Balancing out Bias: Achieving Fairness Through Balanced Training. Xudong Han, Timothy Baldwin, Trevor Cohn. 16 Sep 2021.