ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Learning how to explain neural networks: PatternNet and PatternAttribution (arXiv:1705.05598)

16 May 2017
Pieter-Jan Kindermans
Kristof T. Schütt
Maximilian Alber
K. Müller
D. Erhan
Been Kim
Sven Dähne
    XAI
    FAtt
ArXiv · PDF · HTML

Papers citing "Learning how to explain neural networks: PatternNet and PatternAttribution"

50 / 73 papers shown
Explainable AI needs formal notions of explanation correctness
Stefan Haufe
Rick Wilming
Benedict Clark
Rustam Zhumagambetov
Danny Panknin
Ahcène Boubekki
XAI
31
1
0
22 Sep 2024
Implementing local-explainability in Gradient Boosting Trees: Feature Contribution
Ángel Delgado-Panadero
Beatriz Hernández-Lorca
María Teresa García-Ordás
J. Benítez-Andrades
40
52
0
14 Feb 2024
On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc
Pascal Germain
FaML
29
0
0
20 Nov 2023
PAMI: partition input and aggregate outputs for model interpretation
Wei Shi
Wentao Zhang
Weishi Zheng
Ruixuan Wang
FAtt
26
3
0
07 Feb 2023
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Abdullah Çaglar Öksüz
Anisa Halimi
Erman Ayday
ELM
AAML
21
2
0
04 Feb 2023
Explainable AI for Bioinformatics: Methods, Tools, and Applications
Md. Rezaul Karim
Tanhim Islam
Oya Beyan
Christoph Lange
Michael Cochez
Dietrich Rebholz-Schuhmann
Stefan Decker
29
68
0
25 Dec 2022
Data-Adaptive Discriminative Feature Localization with Statistically Guaranteed Interpretation
Ben Dai
Xiaotong Shen
Lingzhi Chen
Chunlin Li
Wei Pan
FAtt
21
1
0
18 Nov 2022
Explainable Deep Learning to Profile Mitochondrial Disease Using High Dimensional Protein Expression Data
Atif Khan
C. Lawless
Amy Vincent
Satish Pilla
S. Ramesh
A. Mcgough
36
0
0
31 Oct 2022
From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
Reduan Achtibat
Maximilian Dreyer
Ilona Eisenbraun
S. Bosse
Thomas Wiegand
Wojciech Samek
Sebastian Lapuschkin
FAtt
30
131
0
07 Jun 2022
Visualizing Deep Neural Networks with Topographic Activation Maps
A. Krug
Raihan Kabir Ratul
Christopher Olson
Sebastian Stober
FAtt
AI4CE
36
3
0
07 Apr 2022
Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis
E. Barnes
I. Ebert‐Uphoff
26
73
0
07 Feb 2022
Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde
Maximilian Dreyer
Leander Weber
Moritz Weckbecker
Christopher J. Anders
Thomas Wiegand
Wojciech Samek
Sebastian Lapuschkin
60
7
0
07 Feb 2022
Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng
Peng-Tao Jiang
Linghao Han
Liang Wang
Philip H. S. Torr
FAtt
53
15
0
23 Jan 2022
PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
Sílvia Casacuberta
Esra Suel
Seth Flaxman
FAtt
21
1
0
31 Dec 2021
Evaluating saliency methods on artificial data with different background types
Céline Budding
Fabian Eitel
K. Ritter
Stefan Haufe
XAI
FAtt
MedIm
27
5
0
09 Dec 2021
Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail
H. C. Bravo
S. Feizi
FAtt
20
80
0
29 Nov 2021
Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism
J. M. M. Torres
Sara E. Medina-DeVilliers
T. Clarkson
M. Lerner
Giuseppe Riccardi
30
34
0
25 Nov 2021
Visualizing the Emergence of Intermediate Visual Patterns in DNNs
Mingjie Li
Shaobo Wang
Quanshi Zhang
27
11
0
05 Nov 2021
Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin
Henry C. Woodruff
A. Chatterjee
Philippe Lambin
18
302
0
01 Nov 2021
Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun
Diyi Yang
Xiaoya Li
Tianwei Zhang
Yuxian Meng
Han Qiu
Guoyin Wang
Eduard H. Hovy
Jiwei Li
17
44
0
20 Oct 2021
Discriminative Attribution from Counterfactuals
N. Eckstein
A. S. Bates
G. Jefferis
Jan Funke
FAtt
CML
27
1
0
28 Sep 2021
SoK: Machine Learning Governance
Varun Chandrasekaran
Hengrui Jia
Anvith Thudi
Adelin Travers
Mohammad Yaghini
Nicolas Papernot
38
16
0
20 Sep 2021
This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation
Srishti Gautam
Marina M.-C. Höhne
Stine Hansen
Robert Jenssen
Michael C. Kampffmeyer
27
49
0
27 Aug 2021
Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images
Jiafan Zhuang
Wanying Tao
Jianfei Xing
Wei Shi
Ruixuan Wang
Weishi Zheng
FAtt
37
3
0
25 Aug 2021
Explaining Bayesian Neural Networks
Kirill Bykov
Marina M.-C. Höhne
Adelaida Creosteanu
Klaus-Robert Müller
Frederick Klauschen
Shinichi Nakajima
Marius Kloft
BDL
AAML
34
25
0
23 Aug 2021
Towards Interpretable Deep Networks for Monocular Depth Estimation
Zunzhi You
Yi-Hsuan Tsai
W. Chiu
Guanbin Li
FAtt
34
17
0
11 Aug 2021
Improved Feature Importance Computations for Tree Models: Shapley vs. Banzhaf
Adam Karczmarz
A. Mukherjee
Piotr Sankowski
Piotr Wygocki
FAtt
TDI
14
6
0
09 Aug 2021
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders
David Neumann
Wojciech Samek
K. Müller
Sebastian Lapuschkin
29
64
0
24 Jun 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe
Bettina Finzel
XAI
29
184
0
15 May 2021
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez
Alberto Lamas
Jules Sanchez
Gianni Franchi
Ivan Donadello
S. Tabik
David Filliat
P. Cruz
Rosana Montes
Francisco Herrera
49
77
0
24 Apr 2021
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira
R. Mello
F. Paulovich
AAML
24
9
0
18 Mar 2021
Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis
I. Ebert‐Uphoff
E. Barnes
OOD
28
75
0
18 Mar 2021
Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski
Christopher J. Anders
K. Müller
Pan Kessel
FAtt
21
63
0
18 Dec 2020
Self-Explaining Structures Improve NLP Models
Zijun Sun
Chun Fan
Qinghong Han
Xiaofei Sun
Yuxian Meng
Fei Wu
Jiwei Li
MILM
XAI
LRM
FAtt
39
38
0
03 Dec 2020
Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert
Scott M. Lundberg
Su-In Lee
FAtt
36
241
0
21 Nov 2020
Learning Propagation Rules for Attribution Map Generation
Yiding Yang
Jiayan Qiu
Xiuming Zhang
Dacheng Tao
Xinchao Wang
FAtt
38
17
0
14 Oct 2020
Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach
Nicholas F Halliwell
Freddy Lecue
FAtt
22
9
0
29 Sep 2020
A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research
Cody Watson
Nathan Cooper
David Nader-Palacio
Kevin Moran
Denys Poshyvanyk
26
111
0
14 Sep 2020
Survey of XAI in digital pathology
Milda Pocevičiūtė
Gabriel Eilertsen
Claes Lundström
11
56
0
14 Aug 2020
Explainable Face Recognition
Jonathan R. Williford
Brandon B. May
J. Byrne
CVBM
16
71
0
03 Aug 2020
Weakly-Supervised Cell Tracking via Backward-and-Forward Propagation
Kazuya Nishimura
Junya Hayashida
Chenyang Wang
Dai Fei Elmer Ker
Ryoma Bise
26
17
0
30 Jul 2020
Interpreting and Disentangling Feature Components of Various Complexity from DNNs
Jie Ren
Mingjie Li
Zexu Liu
Quanshi Zhang
CoGe
13
18
0
29 Jun 2020
Embedded Encoder-Decoder in Convolutional Networks Towards Explainable AI
A. Tavanaei
XAI
12
31
0
19 Jun 2020
Human-Expert-Level Brain Tumor Detection Using Deep Learning with Data Distillation and Augmentation
D. Lu
N. Polomac
Iskra Gacheva
E. Hattingen
Jochen Triesch
18
18
0
17 Jun 2020
How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov
Marina M.-C. Höhne
Klaus-Robert Müller
Shinichi Nakajima
Marius Kloft
UQCV
FAtt
27
31
0
16 Jun 2020
Explainable deep learning models in medical image analysis
Amitojdeep Singh
S. Sengupta
Vasudevan Lakshminarayanan
XAI
35
482
0
28 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML
XAI
38
370
0
30 Apr 2020
A Survey of Deep Learning for Scientific Discovery
M. Raghu
Erica Schmidt
OOD
AI4CE
38
120
0
26 Mar 2020
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek
G. Montavon
Sebastian Lapuschkin
Christopher J. Anders
K. Müller
XAI
44
82
0
17 Mar 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt
Maximilian Granz
Tim Landgraf
BDL
FAtt
XAI
13
132
0
20 Dec 2019