Towards Rigorous Interpretations: a Formalisation of Feature Attribution
arXiv 2104.12437 (v2, latest version)
26 April 2021
Darius Afchar, Romain Hennequin, Vincent Guigue
Tags: FAtt
Available as: arXiv abstract, PDF, HTML
Papers citing "Towards Rigorous Interpretations: a Formalisation of Feature Attribution" (25 of 25 papers shown)
Challenging common interpretability assumptions in feature attribution explanations
Jonathan Dinu, Jeffrey P. Bigham, J. Z. K. Unaffiliated. 04 Dec 2020. Cited by 14.

Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert, Scott M. Lundberg, Su-In Lee. Tags: FAtt. 06 Nov 2020. Cited by 33.

Explaining Deep Neural Networks
Oana-Maria Camburu. Tags: XAI, FAtt. 04 Oct 2020. Cited by 26.

Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction
Darius Afchar, Romain Hennequin. Tags: FAtt, XAI. 26 Aug 2020. Cited by 16.

How does this interaction affect me? Interpretable attribution for feature interactions
Michael Tsang, Sirisha Rambhatla, Yan Liu. Tags: FAtt. 19 Jun 2020. Cited by 87.

Problems with Shapley-value-based explanations as feature importance measures
Indra Elizabeth Kumar, Suresh Venkatasubramanian, C. Scheidegger, Sorelle A. Friedler. Tags: TDI, FAtt. 25 Feb 2020. Cited by 366.

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf. Tags: BDL, FAtt, XAI. 20 Dec 2019. Cited by 132.

Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models
Benjamin J. Lengerich, S. Tan, C. Chang, Giles Hooker, R. Caruana. 12 Nov 2019. Cited by 42.

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju. Tags: FAtt, AAML, MLAU. 06 Nov 2019. Cited by 819.

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera. Tags: XAI. 22 Oct 2019. Cited by 6,269.

The many Shapley values for model explanation
Mukund Sundararajan, A. Najmi. Tags: TDI, FAtt. 22 Aug 2019. Cited by 635.

TabNet: Attentive Interpretable Tabular Learning
Sercan O. Arik, Tomas Pfister. Tags: LMTD. 20 Aug 2019. Cited by 1,355.

Improving performance of deep learning models with axiomatic attribution priors and expected gradients
G. Erion, Joseph D. Janizek, Pascal Sturmfels, Scott M. Lundberg, Su-In Lee. Tags: OOD, BDL, FAtt. 25 Jun 2019. Cited by 81.

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel. Tags: AAML, FAtt. 19 Jun 2019. Cited by 334.

Is Attention Interpretable?
Sofia Serrano, Noah A. Smith. 09 Jun 2019. Cited by 684.

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim. Tags: FAtt, AAML, XAI. 08 Oct 2018. Cited by 1,970.

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan. Tags: MLT, FAtt. 21 Feb 2018. Cited by 575.

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim. Tags: FAtt, XAI. 02 Nov 2017. Cited by 687.

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg. Tags: FAtt, ODL. 12 Jun 2017. Cited by 2,235.

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee. Tags: FAtt. 22 May 2017. Cited by 22,002.

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan. Tags: OOD, FAtt. 04 Mar 2017. Cited by 6,015.

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim. Tags: XAI, FaML. 28 Feb 2017. Cited by 3,809.

The Mythos of Model Interpretability
Zachary Chase Lipton. Tags: FaML. 10 Jun 2016. Cited by 3,706.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. Tags: FAtt, FaML. 16 Feb 2016. Cited by 17,027.

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. Tags: FAtt. 20 Dec 2013. Cited by 7,308.