ResearchTrend.AI
Towards Equal Opportunity Fairness through Adversarial Learning

12 March 2022
Xudong Han, Timothy Baldwin, Trevor Cohn
FaML

Papers citing "Towards Equal Opportunity Fairness through Adversarial Learning" (8 papers shown)

  1. A Brief History of Prompt: Leveraging Language Models (Through Advanced Prompting)
     G. Muktadir. SILM. 30 Sep 2023.
  2. Bias and Fairness in Large Language Models: A Survey
     Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed. AILaw. 02 Sep 2023.
  3. Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP
     Xudong Han, Timothy Baldwin, Trevor Cohn. 11 Feb 2023.
  4. Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness
     C. Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, ..., Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer. 10 Nov 2022.
  5. Systematic Evaluation of Predictive Fairness
     Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann. 17 Oct 2022.
  6. fairlib: A Unified Framework for Assessing and Improving Classification Fairness
     Xudong Han, Aili Shen, Yitong Li, Lea Frermann, Timothy Baldwin, Trevor Cohn. VLM, FaML. 04 May 2022.
  7. Evaluating Debiasing Techniques for Intersectional Biases
     Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann. 21 Sep 2021.
  8. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
     Alexandra Chouldechova. FaML. 24 Oct 2016.