Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models
arXiv:2012.09951 · 17 December 2020
Brittany Johnson, Jesse Bartola, Rico Angell, Katherine Keith, Sam Witty, S. Giguere, Yuriy Brun
Topics: FaML

Papers citing "Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models"

8 papers

Generative AI Voting: Fair Collective Choice is Resilient to LLM Biases and Inconsistencies
Srijoni Majumdar, Edith Elkind, Evangelos Pournaras
Topics: SyDa · Citations: 1 · 31 May 2024

My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
Aimen Gaba, Zhanna Kaufman, Jason Chueng, Marie Shvakel, Kyle Wm. Hall, Yuriy Brun, Cindy Xiong Bearfield
Citations: 14 · 07 Aug 2023

Fair Enough: Searching for Sufficient Measures of Fairness
Suvodeep Majumder, Joymallya Chakraborty, Gina R. Bai, Kathryn T. Stolee, Tim Menzies
Citations: 26 · 25 Oct 2021

Blindspots in Python and Java APIs Result in Vulnerable Code
Yuriy Brun, Tian Lin, J. Somerville, Elisha M Myers, Natalie C. Ebner
Citations: 7 · 10 Mar 2021

Astraea: Grammar-based Fairness Testing
E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
Citations: 27 · 06 Oct 2020

Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach
Topics: FaML, HAI · Citations: 742 · 13 Dec 2018

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML · Citations: 3,683 · 28 Feb 2017

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
Topics: FaML · Citations: 2,082 · 24 Oct 2016