The What-If Tool: Interactive Probing of Machine Learning Models (arXiv:1907.04135)
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, F. Viégas, Jimbo Wilson (9 July 2019) [VLM]

Papers citing "The What-If Tool: Interactive Probing of Machine Learning Models" (50 of 230 papers shown)

Towards Explainable Artificial Intelligence in Banking and Financial Services
Ambreen Hanif (14 Dec 2021)

A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions
Brianna Richardson, J. Gilbert (10 Dec 2021) [FaML]

MCCE: Monte Carlo sampling of realistic counterfactual explanations
Annabelle Redelmeier, Martin Jullum, K. Aas, Anders Løland (18 Nov 2021) [BDL]

Interactive Analysis of CNN Robustness
Stefan Sietzen, Mathias Lechner, Judy Borowski, Ramin Hasani, Manuela Waldner (14 Oct 2021) [AAML]

A Sociotechnical View of Algorithmic Fairness
Mateusz Dolata, Stefan Feuerriegel, Gerhard Schwabe (27 Sep 2021) [FaML]

Augmenting Decision Making via Interactive What-If Analysis
Sneha Gathani, Madelon Hulsebos, James Gale, P. Haas, Çağatay Demiralp (13 Sep 2021)

AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation
Oscar Gomez, Steffen Holter, Jun Yuan, E. Bertini (12 Sep 2021) [AAML, CML, HAI]

Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud
Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, J. Gelman, ..., Muhammad Bilal Zafar, Sanjiv Ranjan Das, Kevin Haas, Tyler Hill, K. Kenthapadi (07 Sep 2021) [ELM, FaML]

Contrastive Identification of Covariate Shift in Image Data
Matthew Lyle Olson, Thu Nguyen, Gaurav Dixit, Neale Ratzlaff, Weng-Keen Wong, Minsuk Kahng (18 Aug 2021) [OOD]

VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models
Furui Cheng, Dongyu Liu, F. Du, Yanna Lin, Alexandra Zytek, Haomin Li, Huamin Qu, K. Veeramachaneni (04 Aug 2021)

Improving Visualization Interpretation Using Counterfactuals
Smiti Kaul, D. Borland, Nan Cao, David Gotz (21 Jul 2021) [CML]

Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions
Eike Petersen, Yannik Potdevin, Esfandiar Mohammadi, Stephan Zidowitz, Sabrina Breyer, ..., Sandra Henn, Ludwig Pechmann, M. Leucker, P. Rostalski, Christian Herzog (20 Jul 2021) [FaML, AILaw, OOD]

Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior
Angie Boggust, Benjamin Hoover, Arvind Satyanarayan, Hendrik Strobelt (20 Jul 2021)

M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis
Xingbo Wang, Jianben He, Zhihua Jin, Muqiao Yang, Yong Wang, Huamin Qu (17 Jul 2021)

The Spotlight: A General Method for Discovering Systematic Errors in Deep Learning Models
G. d'Eon, Jason d'Eon, J. R. Wright, Kevin Leyton-Brown (01 Jul 2021)

Productivity, Portability, Performance: Data-Centric Python
Yiheng Wang, Yao Zhang, Yanzhang Wang, Yan Wan, Jiao Wang, Zhongyuan Wu, Yuhao Yang, Bowen She (01 Jul 2021)

Explanatory Pluralism in Explainable AI
Yiheng Yao (26 Jun 2021) [XAI]

Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations
Abubakar Abid, Mert Yuksekgonul, James Zou (24 Jun 2021) [CML]

Rational Shapley Values
David S. Watson (18 Jun 2021)

FairCanary: Rapid Continuous Explainable Fairness
Avijit Ghosh, Aalok Shanbhag, Christo Wilson (13 Jun 2021)

FedNLP: An interpretable NLP System to Decode Federal Reserve Communications
Jean Lee, Hoyoul Luis Youn, Nicholas Stevens, Josiah Poon, S. Han (11 Jun 2021)

Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden (21 May 2021) [XAI]

Cause and Effect: Hierarchical Concept-based Explanation of Neural Networks
Mohammad Nokhbeh Zaeem, Majid Komeili (14 May 2021) [CML]

When Fair Ranking Meets Uncertain Inference
Avijit Ghosh, Ritam Dutt, Christo Wilson (05 May 2021)

TrustyAI Explainability Toolkit
Rob Geada, Tommaso Teofili, Rui Vieira, Rebecca Whitworth, Daniele Zonca (26 Apr 2021)

NICE: An Algorithm for Nearest Instance Counterfactual Explanations
Dieter Brughmans, Pieter Leyman, David Martens (15 Apr 2021)

Does My Representation Capture X? Probe-Ably
Deborah Ferreira, Julia Rozanova, Mokanarangan Thayaparan, Marco Valentino, André Freitas (12 Apr 2021)

Why? Why not? When? Visual Explanations of Agent Behavior in Reinforcement Learning
Aditi Mishra, Utkarsh Soni, Jinbin Huang, Chris Bryan (06 Apr 2021) [OffRL]

Coalitional strategies for efficient individual prediction explanation
Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier, C. Soulé-Dupuy (01 Apr 2021)

Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice
David S. Watson, Limor Gultchin, Ankur Taly, Luciano Floridi (27 Mar 2021)

Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems
Chloé Bakalar, Renata Barreto, Stevie Bergman, Miranda Bogen, Bobbie Chern, ..., J. Simons, Jonathan Tannen, Edmund Tong, Kate Vredenburgh, Jiejing Zhao (10 Mar 2021) [FaML]

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge (07 Mar 2021) [CML]

WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings
Bhavya Ghai, Md. Naimul Hoque, Klaus Mueller (05 Mar 2021)

Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms
Miguel Á. Carreira-Perpiñán, Suryabhan Singh Hada (01 Mar 2021) [CML, AAML]

If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques
Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth (26 Feb 2021) [CML]

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan (17 Feb 2021) [FAtt]

RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization
Austin P. Wright, Omar Shaikh, Haekyu Park, Will Epperson, Muhammed Ahmed, Stephane Pinel, Duen Horng Chau, Diyi Yang (08 Feb 2021)

BeFair: Addressing Fairness in the Banking Sector
Alessandro Castelnovo, Riccardo Crupi, Giulia Del Gamba, Greta Greco, A. Naseer, D. Regoli, Beatriz San Miguel González (03 Feb 2021) [FaML]

Evaluating the Interpretability of Generative Models by Interactive Reconstruction
A. Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez (02 Feb 2021)

Soliciting Stakeholders' Fairness Notions in Child Maltreatment Predictive Systems
H. Cheng, Logan Stapleton, Ruiqi Wang, Paige E. Bullock, Alexandra Chouldechova, Zhiwei Steven Wu, Haiyi Zhu (01 Feb 2021) [FaML]

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan (24 Jan 2021)

Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making
Han Liu, Vivian Lai, Chenhao Tan (13 Jan 2021)

Robustness Gym: Unifying the NLP Evaluation Landscape
Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason M. Wu, Stephan Zheng, Caiming Xiong, Joey Tianyi Zhou, Christopher Ré (13 Jan 2021) [AAML, OffRL, OOD]

GeCo: Quality Counterfactual Explanations in Real Time
Maximilian Schleich, Zixuan Geng, Yihong Zhang, D. Suciu (05 Jan 2021)

Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making
Md. Naimul Hoque, Klaus Mueller (03 Jan 2021) [CML]

dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python
Hubert Baniecki, Wojciech Kretowicz, Piotr Piątyszek, J. Wiśniewski, P. Biecek (28 Dec 2020) [FaML]

A Statistical Test for Probabilistic Fairness
Bahar Taşkesen, Jose H. Blanchet, Daniel Kuhn, Viet Anh Nguyen (09 Dec 2020) [FaML]

GNNLens: A Visual Analytics Approach for Prediction Error Diagnosis of Graph Neural Networks
Zhihua Jin, Yong Wang, Qianwen Wang, Yao Ming, Tengfei Ma, Huamin Qu (22 Nov 2020) [HAI]

TBSSvis: Visual Analytics for Temporal Blind Source Separation
Nikolaus Piccolotto, M. Bögl, T. Gschwandtner, C. Muehlmann, K. Nordhausen, Peter Filzmoser, Silvia Miksch (19 Nov 2020) [AI4TS]

HypperSteer: Hypothetical Steering and Data Perturbation in Sequence Prediction with Deep Learning
Chuan-Chi Wang, K. Ma (04 Nov 2020) [OOD, LLMSV]