One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
6 September 2019 · XAI

Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. V. Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, P. Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang
Papers citing "One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques" (30 of 30 papers shown)
Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy · 1 citation · 13 Mar 2025

DepressionX: Knowledge Infused Residual Attention for Explainable Depression Severity Assessment
Yusif Ibrahimov, Tarique Anwar, Tommy Yuan · 0 citations · 28 Jan 2025

XEQ Scale for Evaluating XAI Experience Quality
A. Wijekoon, Nirmalie Wiratunga, D. Corsar, Kyle Martin, Ikechukwu Nkisi-Orji, Belén Díaz-Agudo, Derek Bridge · 2 citations · 20 Jan 2025

DILA: Dictionary Label Attention for Mechanistic Interpretability in High-dimensional Multi-label Medical Coding Prediction
John Wu, David Wu, Jimeng Sun · 0 citations · 16 Sep 2024

Normalized AOPC: Fixing Misleading Faithfulness Metrics for Feature Attribution Explainability
Joakim Edin, Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars Maaløe, Maria Maistro · 4 citations · 15 Aug 2024

CELL your Model: Contrastive Explanations for Large Language Models
Ronny Luss, Erik Miehling, Amit Dhurandhar · 0 citations · 17 Jun 2024

SurvLIME: A method for explaining machine learning survival models
M. Kovalev, Lev V. Utkin, E. Kasimov · 90 citations · 18 Mar 2020

Generalized Linear Rule Models
Dennis L. Wei, S. Dash, Tian Gao, Oktay Gunluk · 63 citations · 05 Jun 2019

TED: Teaching AI to Explain its Decisions
Michael Hind, Dennis L. Wei, Murray Campbell, Noel Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan N. Ramamurthy, Kush R. Varshney · 110 citations · 12 Nov 2018

Improving Simple Models with Confidence Profiles
Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder Olsen · 46 citations · 19 Jul 2018

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola · MILM, XAI · 940 citations · 20 Jun 2018

Boolean Decision Rules via Column Generation
S. Dash, Oktay Gunluk, Dennis L. Wei · 174 citations · 24 May 2018

Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das · FAtt · 589 citations · 21 Feb 2018

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI · 3,954 citations · 06 Feb 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 1,837 citations · 30 Nov 2017

Variational Inference of Disentangled Latent Concepts from Unlabeled Observations
Abhishek Kumar, P. Sattigeri, Avinash Balakrishnan · BDL, DRL · 523 citations · 02 Nov 2017

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter, Brent Mittelstadt, Chris Russell · MLAU · 2,350 citations · 01 Nov 2017

Interpretable Convolutional Neural Networks
Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu · FAtt · 780 citations · 02 Oct 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller · XAI · 4,259 citations · 22 Jun 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · TDI · 2,882 citations · 14 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 3,776 citations · 28 Feb 2017

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra · FAtt · 19,929 citations · 07 Oct 2016

Rationalizing Neural Predictions
Tao Lei, Regina Barzilay, Tommi Jaakkola · 812 citations · 13 Jun 2016

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel · GAN · 4,232 citations · 12 Jun 2016

Generating Visual Explanations
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell · VLM, FAtt · 618 citations · 28 Mar 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 16,931 citations · 16 Feb 2016

Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks
Anh Totti Nguyen, J. Yosinski, Jeff Clune · 329 citations · 11 Feb 2016

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean · FedML · 19,609 citations · 09 Mar 2015

Do Deep Nets Really Need to be Deep?
Lei Jimmy Ba, R. Caruana · 2,117 citations · 21 Dec 2013

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 7,289 citations · 20 Dec 2013