ResearchTrend.AI

FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning (arXiv:1904.05419)
10 April 2019
Ángel Alexander Cabrera, Will Epperson, Fred Hohman, Minsuk Kahng, Jamie Morgenstern, Duen Horng Chau
    FaML

Papers citing "FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning"

39 papers

Classifier-to-Bias: Toward Unsupervised Automatic Bias Detection for Visual Classifiers
  Quentin Guimard, Moreno D'Incà, Massimiliano Mancini, Elisa Ricci · SSL · 29 Apr 2025

Interpretable and Fair Mechanisms for Abstaining Classifiers
  Daphne Lenders, Andrea Pugnana, Roberto Pellungrini, Toon Calders, D. Pedreschi, F. Giannotti · FaML · 24 Mar 2025

Misty: UI Prototyping Through Interactive Conceptual Blending
  Yuwen Lu, Alan Leung, Amanda Swearngin, Jeffrey Nichols, Titus Barik · 20 Sep 2024

EARN Fairness: Explaining, Asking, Reviewing, and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders
  Lin Luo, Yuri Nakao, Mathieu Chollet, Hiroya Inakoshi, Simone Stumpf · 16 Jul 2024

My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
  Aimen Gaba, Zhanna Kaufman, Jason Chueng, Marie Shvakel, Kyle Wm. Hall, Yuriy Brun, Cindy Xiong Bearfield · 07 Aug 2023

Dataopsy: Scalable and Fluid Visual Exploration using Aggregate Query Sculpting
  Md. Naimul Hoque, Niklas Elmqvist · 05 Aug 2023

LINGO: Visually Debiasing Natural Language Instructions to Support Task Diversity
  Anjana Arunkumar, Shubham Sharma, Rakhi Agrawal, Sriramakrishnan Chandrasekaran, Chris Bryan · 12 Apr 2023

Angler: Helping Machine Translation Practitioners Prioritize Model Improvements
  Samantha Robertson, Zijie J. Wang, Dominik Moritz, Mary Beth Kery, Fred Hohman · 12 Apr 2023

Improving Human-AI Collaboration With Descriptions of AI Behavior
  Ángel Alexander Cabrera, Adam Perer, Jason I. Hong · 06 Jan 2023

Detection of Groups with Biased Representation in Ranking
  Jinyang Li, Y. Moskovitch, H. V. Jagadish · 30 Dec 2022

The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations
  Angelos Chatzimparmpas, R. Martins, I. Jusufi, K. Kucher, Fabrice Rossi, A. Kerren · FAtt · 22 Dec 2022

Manifestations of Xenophobia in AI Systems
  Nenad Tomašev, J. L. Maynard, Iason Gabriel · 15 Dec 2022

BiaScope: Visual Unfairness Diagnosis for Graph Embeddings
  Agapi Rissaki, Bruno Scarone, David Liu, Aditeya Pandey, Brennan Klein, Tina Eliassi-Rad, M. Borkin · FaML · 12 Oct 2022

Variable-Based Calibration for Machine Learning Classifiers
  Mark Kelly, Padhraic Smyth · 30 Sep 2022

A Visual Analytics System for Improving Attention-based Traffic Forecasting Models
  Seungmin Jin, Hyunwoo Lee, Cheonbok Park, Hyeshin Chu, Yunwon Tae, Jaegul Choo, Sungahn Ko · 08 Aug 2022

Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases
  David Munechika, Zijie J. Wang, Jack Reidy, Josh Rubin, Krishna Gade, K. Kenthapadi, Duen Horng Chau · MLAU · 25 Jun 2022

When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction
  Vinith M. Suriyakumar, Marzyeh Ghassemi, Berk Ustun · 04 Jun 2022

Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness
  Yuri Nakao, Lorenzo Strappelli, Simone Stumpf, A. Naseer, D. Regoli, Giulia Del Gamba · 01 Jun 2022

DendroMap: Visual Exploration of Large-Scale Image Datasets for Machine Learning with Treemaps
  Donald Bertucci, M. Hamid, Yashwanthi Anand, Anita Ruangrotsakun, Delyar Tabatabai, Melissa Perez, Minsuk Kahng · 14 May 2022

De-biasing "bias" measurement
  K. Lum, Yunfeng Zhang, Amanda Bower · 11 May 2022

Towards Involving End-users in Interactive Human-in-the-loop AI Fairness
  Yuri Nakao, Simone Stumpf, Subeida Ahmed, A. Naseer, Lorenzo Strappelli · 22 Apr 2022

VisCUIT: Visual Auditor for Bias in CNN Image Classifier
  Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau · 12 Apr 2022

iSEA: An Interactive Pipeline for Semantic Error Analysis of NLP Models
  Jun Yuan, Jesse Vig, Nazneen Rajani · 08 Mar 2022

Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment
  Yuyang Gao, Tong Sun, Liang Zhao, Sungsoo Ray Hong · HAI · 06 Feb 2022

LMdiff: A Visual Diff Tool to Compare Language Models
  Hendrik Strobelt, Benjamin Hoover, Arvind Satyanarayan, Sebastian Gehrmann · VLM · 02 Nov 2021

The Spotlight: A General Method for Discovering Systematic Errors in Deep Learning Models
  G. d'Eon, Jason d'Eon, J. R. Wright, Kevin Leyton-Brown · 01 Jul 2021

Productivity, Portability, Performance: Data-Centric Python
  Yiheng Wang, Yao Zhang, Yanzhang Wang, Yan Wan, Jiao Wang, Zhongyuan Wu, Yuhao Yang, Bowen She · 01 Jul 2021

WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings
  Bhavya Ghai, Md. Naimul Hoque, Klaus Mueller · 05 Mar 2021

Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models
  Brittany Johnson, Jesse Bartola, Rico Angell, Katherine Keith, Sam Witty, S. Giguere, Yuriy Brun · FaML · 17 Dec 2020

One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification
  Kenji Kobayashi, Yuri Nakao · FaML · 26 Oct 2020

Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation
  Hong Shen, Wesley Hanwen Deng, Aditi Chattopadhyay, Zhiwei Steven Wu, Xu Wang, Haiyi Zhu · 22 Oct 2020

mage: Fluid Moves Between Code and Graphical Work in Computational Notebooks
  Mary Beth Kery, Donghao Ren, Fred Hohman, Dominik Moritz, Kanit Wongsuphasawat, Kayur Patel · 22 Sep 2020

Competing Models: Inferring Exploration Patterns and Information Relevance via Bayesian Model Selection
  S. Monadjemi, Roman Garnett, Alvitta Ottley · 13 Sep 2020

A Survey of Visual Analytics Techniques for Machine Learning
  Jun Yuan, Changjian Chen, Weikai Yang, Mengchen Liu, Jiazhi Xia, Shixia Liu · 21 Aug 2020

Designing Tools for Semi-Automated Detection of Machine Learning Biases: An Interview Study
  Po-Ming Law, Sana Malik, F. Du, Moumita Sinha · 13 Mar 2020

Joint Optimization of AI Fairness and Utility: A Human-Centered Approach
  Yunfeng Zhang, Rachel K. E. Bellamy, Kush R. Varshney · 05 Feb 2020

Auditing and Achieving Intersectional Fairness in Classification Problems
  Giulio Morina, V. Oliinyk, J. Waton, Ines Marusic, K. Georgatzis · FaML · 04 Nov 2019

Improving fairness in machine learning systems: What do industry practitioners need?
  Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach · FaML, HAI · 13 Dec 2018

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
  Alexandra Chouldechova · FaML · 24 Oct 2016