Evaluating the visualization of what a Deep Neural Network has learned

21 September 2015
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller
Topics: XAI

Papers citing "Evaluating the visualization of what a Deep Neural Network has learned"

Showing 50 of 510 citing papers:

ERASER: A Benchmark to Evaluate Rationalized NLP Models
Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace
24 · 626 · 0 · 08 Nov 2019

Explanation by Progressive Exaggeration
Sumedha Singla, Brian Pollack, Junxiang Chen, Kayhan Batmanghelich
Topics: FAtt, MedIm
4 · 103 · 0 · 01 Nov 2019

Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models
L. Brocki, N. C. Chung
Topics: FAtt
20 · 21 · 0 · 29 Oct 2019

Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
Aya Abdelsalam Ismail, Mohamed K. Gunady, L. Pessoa, H. C. Bravo, S. Feizi
Topics: AI4TS
25 · 50 · 0 · 27 Oct 2019

Towards Best Practice in Explaining Neural Network Decisions with LRP
M. Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
14 · 148 · 0 · 22 Oct 2019

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Topics: FAtt
122 · 297 · 0 · 17 Oct 2019

Explaining image classifiers by removing input features using generative models
Chirag Agarwal, Anh Totti Nguyen
Topics: FAtt
28 · 15 · 0 · 09 Oct 2019

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller
Topics: XAI
32 · 436 · 0 · 26 Sep 2019

Explaining and Interpreting LSTMs
L. Arras, Jose A. Arjona-Medina, Michael Widrich, G. Montavon, Michael Gillhofer, K. Müller, Sepp Hochreiter, Wojciech Samek
Topics: FAtt, AI4TS
16 · 79 · 0 · 25 Sep 2019

Towards a Rigorous Evaluation of XAI Methods on Time Series
U. Schlegel, Hiba Arnout, Mennatallah El-Assady, Daniela Oelke, Daniel A. Keim
Topics: XAI, AI4TS
15 · 169 · 0 · 16 Sep 2019

Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations
Liam Hiley, Alun D. Preece, Y. Hicks
Topics: XAI
11 · 15 · 0 · 07 Sep 2019

Neural Cognitive Diagnosis for Intelligent Education Systems
Fei-Yue Wang, Qi Liu, Enhong Chen, Zhenya Huang, Yuying Chen, Yu Yin, Zai Huang, Shijin Wang
Topics: AI4Ed
13 · 225 · 0 · 23 Aug 2019

Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
Bao Gia Doan, Ehsan Abbasnejad, D. Ranasinghe
Topics: AAML
11 · 66 · 0 · 09 Aug 2019

Benchmarking Attribution Methods with Relative Feature Importance
Mengjiao Yang, Been Kim
Topics: FAtt, XAI
16 · 140 · 0 · 23 Jul 2019

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa, Cuntai Guan
Topics: XAI
42 · 1,414 · 0 · 17 Jul 2019

Saliency Maps Generation for Automatic Text Summarization
David Tuckey, Krysia Broda, A. Russo
Topics: FAtt
10 · 3 · 0 · 12 Jul 2019

Global Aggregations of Local Explanations for Black Box models
I. V. D. Linden, H. Haned, Evangelos Kanoulas
Topics: FAtt
11 · 63 · 0 · 05 Jul 2019

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
Topics: AAML, FAtt
20 · 328 · 0 · 19 Jun 2019

From Clustering to Cluster Explanations via Neural Networks
Jacob R. Kauffmann, Malte Esders, Lukas Ruff, G. Montavon, Wojciech Samek, K. Müller
24 · 68 · 0 · 18 Jun 2019

Model Agnostic Contrastive Explanations for Structured Data
Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri
Topics: FAtt
20 · 82 · 0 · 31 May 2019

Leveraging Latent Features for Local Explanations
Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu
Topics: FAtt
41 · 37 · 0 · 29 May 2019

A Rate-Distortion Framework for Explaining Neural Network Decisions
Jan Macdonald, S. Wäldchen, Sascha Hauch, Gitta Kutyniok
11 · 39 · 0 · 27 May 2019

Predicting Model Failure using Saliency Maps in Autonomous Driving Systems
Sina Mohseni, Akshay V. Jagadeesh, Zhangyang Wang
11 · 13 · 0 · 19 May 2019

Full-Gradient Representation for Neural Network Visualization
Suraj Srinivas, F. Fleuret
Topics: MILM, FAtt
8 · 268 · 0 · 02 May 2019

Evaluating Recurrent Neural Network Explanations
L. Arras, Ahmed Osman, K. Müller, Wojciech Samek
Topics: XAI, FAtt
8 · 88 · 0 · 26 Apr 2019

Analysis and Visualization of Deep Neural Networks in Device-Free Wi-Fi Indoor Localization
Shing-Jiuan Liu, Ronald Y. Chang, Feng-Tsun Chien
14 · 21 · 0 · 23 Apr 2019

Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation
Fabian Eitel, Emily Soehler, J. Bellmann-Strobl, A. Brandt, K. Ruprecht, ..., M. Weygandt, J. Haynes, M. Scheel, Friedemann Paul, K. Ritter
19 · 131 · 0 · 18 Apr 2019

Software and application patterns for explanation methods
Maximilian Alber
25 · 11 · 0 · 09 Apr 2019

A Categorisation of Post-hoc Explanations for Predictive Models
John Mitros, Brian Mac Namee
Topics: XAI, CML
14 · 0 · 0 · 04 Apr 2019

Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks
Woo-Jeoung Nam, Shir Gur, Jaesik Choi, Lior Wolf, Seong-Whan Lee
Topics: FAtt
6 · 99 · 0 · 01 Apr 2019

Bridging Adversarial Robustness and Gradient Interpretability
Beomsu Kim, Junghoon Seo, Taegyun Jeon
Topics: AAML
6 · 39 · 0 · 27 Mar 2019

Explaining Anomalies Detected by Autoencoders Using SHAP
Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach
Topics: FAtt, TDI
11 · 86 · 0 · 06 Mar 2019

Aggregating explanation methods for stable and robust explainability
Laura Rieger, Lars Kai Hansen
Topics: AAML, FAtt
32 · 11 · 0 · 01 Mar 2019

Deep learning in bioinformatics: introduction, application, and perspective in big data era
Yu-Hu Li, Chao Huang, Lizhong Ding, Zhongxiao Li, Yijie Pan, Xin Gao
Topics: AI4CE
21 · 295 · 0 · 28 Feb 2019

A novel method for extracting interpretable knowledge from a spiking neural classifier with time-varying synaptic weights
Abeegithan Jeyasothy, Suresh Sundaram, Savitha Ramasamy, N. Sundararajan
4 · 4 · 0 · 28 Feb 2019

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Sebastian Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, Wojciech Samek, K. Müller
17 · 996 · 0 · 26 Feb 2019

Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps
Beomsu Kim, Junghoon Seo, Seunghyun Jeon, Jamyoung Koo, J. Choe, Taegyun Jeon
Topics: FAtt
16 · 69 · 0 · 13 Feb 2019

Fooling Neural Network Interpretations via Adversarial Model Manipulation
Juyeon Heo, Sunghwan Joo, Taesup Moon
Topics: AAML, FAtt
8 · 201 · 0 · 06 Feb 2019

Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI
Shane T. Mueller, R. Hoffman, W. Clancey, Abigail Emrey, Gary Klein
Topics: XAI
10 · 284 · 0 · 05 Feb 2019

On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
Topics: FAtt
28 · 446 · 0 · 27 Jan 2019

Quantifying Interpretability and Trust in Machine Learning Systems
Philipp Schmidt, F. Biessmann
6 · 111 · 0 · 20 Jan 2019

Discovering Molecular Functional Groups Using Graph Convolutional Neural Networks
Phillip E. Pope, Soheil Kolouri, Mohammad Rostami, Charles E. Martin, Heiko Hoffmann
Topics: GNN
33 · 14 · 0 · 01 Dec 2018

A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Sina Mohseni, Niloofar Zarei, Eric D. Ragan
23 · 102 · 0 · 28 Nov 2018

An Overview of Computational Approaches for Interpretation Analysis
Philipp Blandfort, Jörn Hees, D. Patton
21 · 2 · 0 · 09 Nov 2018

Looking Deeper into Deep Learning Model: Attribution-based Explanations of TextCNN
Wenting Xiong, Iftitahu Ni'mah, Juan M. G. Huesca, Werner van Ipenburg, Jan Veldsink, Mykola Pechenizkiy
Topics: FAtt
6 · 7 · 0 · 08 Nov 2018

LAMVI-2: A Visual Tool for Comparing and Tuning Word Embedding Models
Xin Rong, Joshua Luckson, Eytan Adar
Topics: VLM
8 · 2 · 0 · 22 Oct 2018

Logic Negation with Spiking Neural P Systems
Daniel Rodríguez-Chavarría, Miguel A. Gutiérrez-Naranjo, J. Borrego-Díaz
Topics: NAI
11 · 3 · 0 · 18 Oct 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Topics: FAtt, AAML, XAI
37 · 1,927 · 0 · 08 Oct 2018

Explaining the Unique Nature of Individual Gait Patterns with Deep Learning
Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, K. Müller, W. Schöllhorn
Topics: AI4CE
15 · 207 · 0 · 13 Aug 2018

iNNvestigate neural networks!
Maximilian Alber, Sebastian Lapuschkin, P. Seegerer, Miriam Hagele, Kristof T. Schütt, G. Montavon, Wojciech Samek, K. Müller, Sven Dähne, Pieter-Jan Kindermans
8 · 348 · 0 · 13 Aug 2018