Addressing Both Statistical and Causal Gender Fairness in NLP Models

30 March 2024
Hannah Chen, Yangfeng Ji, David E. Evans
Papers citing "Addressing Both Statistical and Causal Gender Fairness in NLP Models"

2 / 2 papers shown

Evaluating Debiasing Techniques for Intersectional Biases
Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann
21 Sep 2021

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
24 Oct 2016