arXiv:1512.02479
Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
8 December 2015
G. Montavon
Sebastian Lapuschkin
Alexander Binder
Wojciech Samek
Klaus-Robert Müller
FAtt
Papers citing
"Explaining NonLinear Classification Decisions with Deep Taylor Decomposition"
Showing 50 of 100 citing papers
Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
Tom Ron
M. Weiler-Sagie
Tamir Hazan
FAtt
MedIm
24
6
0
06 Jun 2022
Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer
Idan Schwartz
Lior Wolf
ViT
35
37
0
02 Jun 2022
Comparing interpretation methods in mental state decoding analyses with deep learning models
A. Thomas
Christopher Ré
R. Poldrack
AI4CE
18
2
0
31 May 2022
Attention Flows for General Transformers
Niklas Metzger
Christopher Hahn
Julian Siber
Frederik Schmitt
Bernd Finkbeiner
34
0
0
30 May 2022
ViTOL: Vision Transformer for Weakly Supervised Object Localization
Saurav Gupta
Sourav Lakhotia
Abhay Rawat
Rahul Tallamraju
WSOL
32
21
0
14 Apr 2022
Maximum Entropy Baseline for Integrated Gradients
Hanxiao Tan
FAtt
18
4
0
12 Apr 2022
CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
S. Gadre
Mitchell Wortsman
Gabriel Ilharco
Ludwig Schmidt
Shuran Song
CLIP
LM&Ro
41
142
0
20 Mar 2022
Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis
E. Barnes
I. Ebert‐Uphoff
29
73
0
07 Feb 2022
Learning-From-Disagreement: A Model Comparison and Visual Analytics Framework
Junpeng Wang
Liang Wang
Yan Zheng
Chin-Chia Michael Yeh
Shubham Jain
Wei Zhang
FAtt
30
12
0
19 Jan 2022
Forward Composition Propagation for Explainable Neural Reasoning
Isel Grau
Gonzalo Nápoles
M. Bello
Yamisleydi Salgueiro
A. Jastrzębska
22
0
0
23 Dec 2021
Gradient Frequency Modulation for Visually Explaining Video Understanding Models
Xinmiao Lin
Wentao Bao
Matthew Wright
Yu Kong
FAtt
AAML
27
2
0
01 Nov 2021
On Quantitative Evaluations of Counterfactuals
Frederik Hvilshøj
Alexandros Iosifidis
Ira Assent
19
10
0
30 Oct 2021
TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
Lin Cheng
Pengfei Fang
Yanjie Liang
Liao Zhang
Chunhua Shen
Hanzi Wang
FAtt
22
11
0
11 Oct 2021
Focus! Rating XAI Methods and Finding Biases
Anna Arias-Duart
Ferran Parés
Dario Garcia-Gasulla
Victor Gimenez-Abalos
26
32
0
28 Sep 2021
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Dongqi Han
Zhiliang Wang
Wenqi Chen
Ying Zhong
Su Wang
Han Zhang
Jiahai Yang
Xingang Shi
Xia Yin
AAML
24
76
0
23 Sep 2021
Self-learn to Explain Siamese Networks Robustly
Chao Chen
Yifan Shen
Guixiang Ma
Xiangnan Kong
S. Rangarajan
Xi Zhang
Sihong Xie
46
5
0
15 Sep 2021
AdjointNet: Constraining machine learning models with physics-based codes
S. Karra
B. Ahmmed
M. Mudunuru
AI4CE
PINN
OOD
16
4
0
08 Sep 2021
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders
David Neumann
Wojciech Samek
K. Müller
Sebastian Lapuschkin
29
64
0
24 Jun 2021
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez
Alberto Lamas
Jules Sanchez
Gianni Franchi
Ivan Donadello
Siham Tabik
David Filliat
P. Cruz
Rosana Montes
Francisco Herrera
49
77
0
24 Apr 2021
Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya
Tarun Ram Menta
S. Jagarlapudi
V. Balasubramanian
TDI
22
6
0
19 Apr 2021
White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks
Meghna P. Ayyar
J. Benois-Pineau
A. Zemmari
FAtt
14
17
0
06 Apr 2021
Explaining Representation by Mutual Information
Li Gu
SSL
FAtt
32
0
0
28 Mar 2021
Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang
Matt Fredrikson
Anupam Datta
OOD
FAtt
35
25
0
20 Mar 2021
Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis
I. Ebert‐Uphoff
E. Barnes
OOD
28
75
0
18 Mar 2021
Explainable AI for ML jet taggers using expert variables and layerwise relevance propagation
G. Agarwal
L. Hay
I. Iashvili
Benjamin Mannix
C. McLean
Margaret E. Morris
S. Rappoccio
U. Schubert
43
18
0
26 Nov 2020
Quantifying Explainers of Graph Neural Networks in Computational Pathology
Guillaume Jaume
Pushpak Pati
Behzad Bozorgtabar
Antonio Foncubierta-Rodríguez
Florinda Feroce
A. Anniciello
T. Rau
Jean-Philippe Thiran
M. Gabrani
O. Goksel
FAtt
26
76
0
25 Nov 2020
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar
Giuseppe Casalicchio
B. Bischl
AI4TS
AI4CE
20
397
0
19 Oct 2020
Quantitative and Qualitative Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
Amitojdeep Singh
J. Balaji
M. Rasheed
Varadharajan Jayakumar
R. Raman
Vasudevan Lakshminarayanan
BDL
XAI
FAtt
9
29
0
26 Sep 2020
Adaptive Convolution Kernel for Artificial Neural Networks
F. B. Tek
Ilker Çam
D. Karli
14
12
0
14 Sep 2020
Survey of XAI in digital pathology
Milda Pocevičiūtė
Gabriel Eilertsen
Claes Lundström
14
56
0
14 Aug 2020
Weakly-Supervised Cell Tracking via Backward-and-Forward Propagation
Kazuya Nishimura
Junya Hayashida
Chenyang Wang
Dai Fei Elmer Ker
Ryoma Bise
26
17
0
30 Jul 2020
Sequential Explanations with Mental Model-Based Policies
A. Yeung
Shalmali Joshi
Joseph Jay Williams
Frank Rudzicz
FAtt
LRM
31
15
0
17 Jul 2020
A simple defense against adversarial attacks on heatmap explanations
Laura Rieger
Lars Kai Hansen
FAtt
AAML
30
37
0
13 Jul 2020
Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Thomas Schnake
Oliver Eberle
Jonas Lederer
Shinichi Nakajima
Kristof T. Schütt
Klaus-Robert Müller
G. Montavon
32
215
0
05 Jun 2020
What went wrong and when? Instance-wise Feature Importance for Time-series Models
S. Tonekaboni
Shalmali Joshi
Kieran Campbell
David Duvenaud
Anna Goldenberg
FAtt
OOD
AI4TS
51
14
0
05 Mar 2020
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan
Jinjun Xiong
Mengzhou Li
Ge Wang
AAML
AI4CE
38
300
0
08 Jan 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt
Maximilian Granz
Tim Landgraf
BDL
FAtt
XAI
13
132
0
20 Dec 2019
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
D. Slijepcevic
Fabian Horst
Sebastian Lapuschkin
Anna-Maria Raberger
Matthias Zeppelzauer
Wojciech Samek
C. Breiteneder
W. Schöllhorn
B. Horsak
36
50
0
16 Dec 2019
CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab
W. Karlen
FAtt
CML
34
205
0
27 Oct 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
37
6,111
0
22 Oct 2019
Towards Best Practice in Explaining Neural Network Decisions with LRP
M. Kohlbrenner
Alexander Bauer
Shinichi Nakajima
Alexander Binder
Wojciech Samek
Sebastian Lapuschkin
22
148
0
22 Oct 2019
Software and application patterns for explanation methods
Maximilian Alber
33
11
0
09 Apr 2019
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin
S. Wäldchen
Alexander Binder
G. Montavon
Wojciech Samek
K. Müller
17
996
0
26 Feb 2019
An Overview of Computational Approaches for Interpretation Analysis
Philipp Blandfort
Jörn Hees
D. Patton
21
2
0
09 Nov 2018
Interpretable Convolutional Neural Networks via Feedforward Design
C.-C. Jay Kuo
Min Zhang
Siyang Li
Jiali Duan
Yueru Chen
33
155
0
05 Oct 2018
Explaining the Unique Nature of Individual Gait Patterns with Deep Learning
Fabian Horst
Sebastian Lapuschkin
Wojciech Samek
K. Müller
W. Schöllhorn
AI4CE
28
207
0
13 Aug 2018
Contrastive Explanations with Local Foil Trees
J. V. D. Waa
M. Robeer
J. Diggelen
Matthieu J. S. Brinkhuis
Mark Antonius Neerincx
FAtt
19
82
0
19 Jun 2018
Understanding Patch-Based Learning by Explaining Predictions
Christopher J. Anders
G. Montavon
Wojciech Samek
K. Müller
UQCV
FAtt
30
6
0
11 Jun 2018
Towards computational fluorescence microscopy: Machine learning-based integrated prediction of morphological and molecular tumor profiles
Alexander Binder
M. Bockmayr
Miriam Hagele
S. Wienert
Daniel Heim
...
M. Dietel
A. Hocke
C. Denkert
K. Müller
Frederick Klauschen
AI4CE
8
27
0
28 May 2018
Investigating the influence of noise and distractors on the interpretation of neural networks
Pieter-Jan Kindermans
Kristof T. Schütt
K. Müller
Sven Dähne
FAtt
19
125
0
22 Nov 2016