ResearchTrend.AI

arXiv: 1705.05598
Learning how to explain neural networks: PatternNet and PatternAttribution

16 May 2017 · Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne
Tags: XAI, FAtt

Papers citing "Learning how to explain neural networks: PatternNet and PatternAttribution" (31 of 81 papers shown)
  • A Survey of Deep Learning for Scientific Discovery
    M. Raghu, Erica Schmidt · 26 Mar 2020 · OOD, AI4CE · 120 citations
  • Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
    Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · 17 Mar 2020 · XAI · 82 citations
  • When Explanations Lie: Why Many Modified BP Attributions Fail
    Leon Sixt, Maximilian Granz, Tim Landgraf · 20 Dec 2019 · BDL, FAtt, XAI · 132 citations
  • Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
    Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera · 22 Oct 2019 · XAI · 6,119 citations
  • Towards Best Practice in Explaining Neural Network Decisions with LRP
    M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin · 22 Oct 2019 · 148 citations
  • Decision Explanation and Feature Importance for Invertible Networks
    Juntang Zhuang, Nicha Dvornek, Xiaoxiao Li, Junlin Yang, James S. Duncan · 30 Sep 2019 · AAML, FAtt · 5 citations
  • Towards Explainable Artificial Intelligence
    Wojciech Samek, K. Müller · 26 Sep 2019 · XAI · 436 citations
  • Explaining Convolutional Neural Networks using Softmax Gradient Layer-wise Relevance Propagation
    Brian Kenji Iwana, Ryohei Kuroki, S. Uchida · 06 Aug 2019 · FAtt · 94 citations
  • Interpretable Counterfactual Explanations Guided by Prototypes
    A. V. Looveren, Janis Klaise · 03 Jul 2019 · FAtt · 378 citations
  • Unifying machine learning and quantum chemistry -- a deep neural network for molecular wavefunctions
    Kristof T. Schütt, M. Gastegger, A. Tkatchenko, K. Müller, R. Maurer · 24 Jun 2019 · AI4CE · 382 citations
  • Model Agnostic Contrastive Explanations for Structured Data
    Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri · 31 May 2019 · FAtt · 82 citations
  • Explainability Techniques for Graph Convolutional Networks
    Federico Baldassarre, Hossein Azizpour · 31 May 2019 · GNN, FAtt · 264 citations
  • Leveraging Latent Features for Local Explanations
    Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu · 29 May 2019 · FAtt · 37 citations
  • Explainable AI for Trees: From Local Explanations to Global Understanding
    Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee · 11 May 2019 · FAtt · 286 citations
  • Software and application patterns for explanation methods
    Maximilian Alber · 09 Apr 2019 · 11 citations
  • Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
    Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau · 04 Apr 2019 · FAtt, 3DH, HAI · 213 citations
  • Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
    Marco Ancona, Cengiz Öztireli, Markus Gross · 26 Mar 2019 · FAtt, TDI · 223 citations
  • Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
    Wieland Brendel, Matthias Bethge · 20 Mar 2019 · SSL, FAtt · 557 citations
  • Explaining Neural Networks Semantically and Quantitatively
    Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang · 18 Dec 2018 · FAtt · 54 citations
  • Interactive Naming for Explaining Deep Neural Networks: A Formative Study
    M. Hamidi-Haines, Zhongang Qi, Alan Fern, Fuxin Li, Prasad Tadepalli · 18 Dec 2018 · FAtt, HAI · 11 citations
  • An Overview of Computational Approaches for Interpretation Analysis
    Philipp Blandfort, Jörn Hees, D. Patton · 09 Nov 2018 · 2 citations
  • What made you do this? Understanding black-box decisions with sufficient input subsets
    Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K Gifford · 09 Oct 2018 · FAtt · 77 citations
  • Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
    Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim · 08 Oct 2018 · FAtt, AAML · 128 citations
  • Sanity Checks for Saliency Maps
    Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · 08 Oct 2018 · FAtt, AAML, XAI · 1,931 citations
  • Quantum-chemical insights from interpretable atomistic neural networks
    Kristof T. Schütt, M. Gastegger, A. Tkatchenko, K. Müller · 27 Jun 2018 · AI4CE · 31 citations
  • EEG-GAN: Generative adversarial networks for electroencephalographic (EEG) brain signals
    K. Hartmann, R. Schirrmeister, T. Ball · 05 Jun 2018 · GAN, AI4TS · 229 citations
  • Adaptive neural network classifier for decoding MEG signals
    I. Zubarev, Rasmus Zetter, Hanna-Leena Halme, L. Parkkonen · 28 May 2018 · 46 citations
  • Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
    Xin Zhang, Armando Solar-Lezama, Rishabh Singh · 21 Feb 2018 · FAtt · 63 citations
  • Visual Interpretability for Deep Learning: a Survey
    Quanshi Zhang, Song-Chun Zhu · 02 Feb 2018 · FaML, HAI · 809 citations
  • The (Un)reliability of saliency methods
    Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim · 02 Nov 2017 · FAtt, XAI · 678 citations
  • Deep Learning Techniques for Music Generation -- A Survey
    Jean-Pierre Briot, Gaëtan Hadjeres, F. Pachet · 05 Sep 2017 · MGen · 297 citations