Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Romain Hennequin, Vincent Guigue
26 April 2021 · arXiv:2104.12437 (v2, latest) · FAtt

Papers citing "Towards Rigorous Interpretations: a Formalisation of Feature Attribution"

25 of 25 citing papers shown:

Challenging common interpretability assumptions in feature attribution explanations
Jonathan Dinu, Jeffrey P. Bigham, J. Zico Kolter
56 · 14 · 0 · 04 Dec 2020

Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert, Scott M. Lundberg, Su-In Lee
FAtt
114 · 33 · 0 · 06 Nov 2020

Explaining Deep Neural Networks
Oana-Maria Camburu
XAI, FAtt
89 · 26 · 0 · 04 Oct 2020

Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction
Darius Afchar, Romain Hennequin
FAtt, XAI
67 · 16 · 0 · 26 Aug 2020

How does this interaction affect me? Interpretable attribution for feature interactions
Michael Tsang, Sirisha Rambhatla, Yan Liu
FAtt
68 · 87 · 0 · 19 Jun 2020

Problems with Shapley-value-based explanations as feature importance measures
Indra Elizabeth Kumar, Suresh Venkatasubramanian, C. Scheidegger, Sorelle A. Friedler
TDI, FAtt
79 · 366 · 0 · 25 Feb 2020

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf
BDL, FAtt, XAI
55 · 132 · 0 · 20 Dec 2019

Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models
Benjamin J. Lengerich, S. Tan, C. Chang, Giles Hooker, R. Caruana
48 · 42 · 0 · 12 Nov 2019

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
FAtt, AAML, MLAU
77 · 819 · 0 · 06 Nov 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI
121 · 6,269 · 0 · 22 Oct 2019

The many Shapley values for model explanation
Mukund Sundararajan, A. Najmi
TDI, FAtt
62 · 635 · 0 · 22 Aug 2019

TabNet: Attentive Interpretable Tabular Learning
Sercan O. Arik, Tomas Pfister
LMTD
188 · 1,355 · 0 · 20 Aug 2019

Improving performance of deep learning models with axiomatic attribution priors and expected gradients
G. Erion, Joseph D. Janizek, Pascal Sturmfels, Scott M. Lundberg, Su-In Lee
OOD, BDL, FAtt
61 · 81 · 0 · 25 Jun 2019

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt
81 · 334 · 0 · 19 Jun 2019

Is Attention Interpretable?
Sofia Serrano, Noah A. Smith
108 · 684 · 0 · 09 Jun 2019

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI
141 · 1,970 · 0 · 08 Oct 2018

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
MLT, FAtt
146 · 575 · 0 · 21 Feb 2018

The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
FAtt, XAI
101 · 687 · 0 · 02 Nov 2017

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt, ODL
204 · 2,235 · 0 · 12 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt
1.1K · 22,002 · 0 · 22 May 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
188 · 6,015 · 0 · 04 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
405 · 3,809 · 0 · 28 Feb 2017

The Mythos of Model Interpretability
Zachary Chase Lipton
FaML
183 · 3,706 · 0 · 10 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,027
0
16 Feb 2016
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt
312 · 7,308 · 0 · 20 Dec 2013