The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
arXiv:1711.00867, 2 November 2017 [FAtt, XAI]
Papers citing "The (Un)reliability of saliency methods" (showing 50 of 144):

Explaining Image Classifiers with Multiscale Directional Image Representation
Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie (22 Nov 2022)

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre (17 Nov 2022)

Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods
Josip Jukić, Martin Tutek, Jan Snajder (15 Nov 2022) [FAtt]

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez (10 Nov 2022) [XAI, FAtt]

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian (09 Nov 2022) [XAI, FAtt, AAML]

Explainable Deep Learning to Profile Mitochondrial Disease Using High Dimensional Protein Expression Data
Atif Khan, C. Lawless, Amy Vincent, Satish Pilla, S. Ramesh, A. Mcgough (31 Oct 2022)

Logic-Based Explainability in Machine Learning
Sasha Rubin (24 Oct 2022) [LRM, XAI]

The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
Matija Franklin (05 Oct 2022) [TDI]

Ablation Path Saliency
Justus Sagemüller, Olivier Verdier (26 Sep 2022) [FAtt, AAML]

A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis, A. Kamilaris (19 Sep 2022) [FAtt]

Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino, Franca Delmastro (14 Sep 2022) [AI4TS]

Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
Francesco Foscarin, Katharina Hoedt, Verena Praher, A. Flexer, Gerhard Widmer (26 Aug 2022)

HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning
Xumeng Wang, Wei Chen, Jiazhi Xia, Zhen Wen, Rongchen Zhu, Tobias Schreck (16 Aug 2022) [FedML]

Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly (29 Jul 2022) [XAI, FAtt, LRM]
Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation
Oliver Mey, Deniz Neufeld (21 Jul 2022)
Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
Teodor Chiaburu, F. Biessmann, Frank Haußer (15 Jun 2022)

Multi-Objective Hyperparameter Optimization in Machine Learning -- An Overview
Florian Karl, Tobias Pielok, Julia Moosbauer, Florian Pfisterer, Stefan Coors, ..., Jakob Richter, Michel Lang, Eduardo C. Garrido-Merchán, Juergen Branke, B. Bischl (15 Jun 2022) [AI4CE]

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven (31 May 2022) [FAtt]

Comparing interpretation methods in mental state decoding analyses with deep learning models
A. Thomas, Christopher Ré, R. Poldrack (31 May 2022) [AI4CE]

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger (20 Apr 2022) [AAML]

Maximum Entropy Baseline for Integrated Gradients
Hanxiao Tan (12 Apr 2022) [FAtt]

XAI in the context of Predictive Process Monitoring: Too much to Reveal
Ghada Elkhawaga, Mervat Abuelkheir, M. Reichert (16 Feb 2022)

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre (15 Feb 2022) [AAML]

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne (14 Feb 2022) [XAI, ELM]

Multi-Modal Knowledge Graph Construction and Application: A Survey
Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, N. Yuan (11 Feb 2022)

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis, E. Barnes, I. Ebert-Uphoff (07 Feb 2022)

Visualizing Automatic Speech Recognition -- Means for a Better Understanding?
Karla Markert, Romain Parracone, Mykhailo Kulakov, Philip Sperl, Ching-yu Kao, Konstantin Böttinger (01 Feb 2022)

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova (27 Jan 2022)

PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
Sílvia Casacuberta, Esra Suel, Seth Flaxman (31 Dec 2021) [FAtt]

Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
Shailza Jolly, Pepa Atanasova, Isabelle Augenstein (13 Dec 2021)

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi (29 Nov 2021) [FAtt]

Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism
J. M. M. Torres, Sara E. Medina-DeVilliers, T. Clarkson, M. Lerner, Giuseppe Riccardi (25 Nov 2021)
Self-Interpretable Model with Transformation Equivariant Interpretation
Yipei Wang, Xiaoqian Wang (09 Nov 2021)
Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu (08 Nov 2021) [AAML]

A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni (30 Oct 2021) [AAML]

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy (15 Oct 2021)

Self-explaining Neural Network with Concept-based Explanations for ICU Mortality Prediction
Sayantan Kumar, Sean C. Yu, Thomas Kannampallil, Zachary B. Abrams, Andrew Michelson, Philip R. O. Payne (09 Oct 2021) [FAtt]

Consistent Explanations by Contrastive Learning
Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash (01 Oct 2021) [FAtt]

Discriminative Attribution from Counterfactuals
N. Eckstein, A. S. Bates, G. Jefferis, Jan Funke (28 Sep 2021) [FAtt, CML]

Towards Interpretable Deep Networks for Monocular Depth Estimation
Zunzhi You, Yi-Hsuan Tsai, W. Chiu, Guanbin Li (11 Aug 2021) [FAtt]

GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks
Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Lió (25 Jul 2021)

CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency
M. Jalwana, Naveed Akhtar, Bennamoun, Ajmal Saeed Mian (20 Jun 2021)

Entropy-based Logic Explanations of Neural Networks
Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, S. Melacci (12 Jun 2021) [FAtt, XAI]

Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé, M. Schaar (09 Jun 2021) [FAtt, AI4TS]

Causal Abstractions of Neural Networks
Atticus Geiger, Hanson Lu, Thomas F. Icard, Christopher Potts (06 Jun 2021) [NAI, CML]

Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
Thorben Funke, Megha Khosla, Mandeep Rathee, Avishek Anand (18 May 2021) [FAtt]

Sanity Simulations for Saliency Methods
Joon Sik Kim, Gregory Plumb, Ameet Talwalkar (13 May 2021) [FAtt]

Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Romain Hennequin, Vincent Guigue (26 Apr 2021) [FAtt]

EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, S. Tabik, David Filliat, P. Cruz, Rosana Montes, Francisco Herrera (24 Apr 2021)

On the Sensitivity and Stability of Model Interpretations in NLP
Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang (18 Apr 2021) [FAtt]