Evaluating the visualization of what a Deep Neural Network has learned
21 September 2015 · arXiv:1509.06321 · XAI
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller

Papers citing "Evaluating the visualization of what a Deep Neural Network has learned" (50 of 510 shown)
Survey of XAI in digital pathology
Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström · 14 Aug 2020

ExplAIn: Explanatory Artificial Intelligence for Diabetic Retinopathy Diagnosis
G. Quellec, Hassan Al Hajj, M. Lamard, Pierre-Henri Conze, P. Massin, B. Cochener · 13 Aug 2020

More Than Accuracy: Towards Trustworthy Machine Learning Interfaces for Object Recognition
Hendrik Heuer, Andreas Breiter · 05 Aug 2020 · HAI

Explainable Face Recognition
Jonathan R. Williford, Brandon B. May, J. Byrne · 03 Aug 2020 · CVBM

The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies
A. Markus, J. Kors, P. Rijnbeek · 31 Jul 2020

Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation
Lin Geng Foo, Rui En Ho, Jiamei Sun, Alexander Binder · 21 Jul 2020

Fairwashing Explanations with Off-Manifold Detergent
Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, K. Müller, Pan Kessel · 20 Jul 2020 · FAtt, FaML

Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context
Ehsan Toreini, Mhairi Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Vladimiro González-Zelaya, P. Missier, Magdalene Ng, Aad van Moorsel · 17 Jul 2020

Evaluation for Weakly Supervised Object Localization: Protocol, Metrics, and Datasets
Junsuk Choe, Seong Joon Oh, Sanghyuk Chun, Seungho Lee, Zeynep Akata, Hyunjung Shim · 08 Jul 2020 · WSOL

Solving the Order Batching and Sequencing Problem using Deep Reinforcement Learning
Bram Cals, Yingqian Zhang, R. Dijkman, Claudy van Dorst · 16 Jun 2020 · OffRL

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft · 16 Jun 2020 · UQCV, FAtt

Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon · 05 Jun 2020

Black-box Explanation of Object Detectors via Saliency Maps
Vitali Petsiuk, R. Jain, Varun Manjunatha, Vlad I. Morariu, Ashutosh Mehra, Vicente Ordonez, Kate Saenko · 05 Jun 2020 · FAtt

Evaluations and Methods for Explanation through Robustness Analysis
Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh · 31 May 2020 · XAI

Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone, Luca Longo · 29 May 2020 · XAI

Large scale evaluation of importance maps in automatic speech recognition
V. Trinh, Michael I. Mandel · 21 May 2020

Towards explainable classifiers using the counterfactual approach -- global explanations for discovering bias in data
Agnieszka Mikołajczyk, M. Grochowski, Arkadiusz Kwasigroch · 05 May 2020 · FAtt, CML

Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt, Adrian Weller, J. M. F. Moura · 01 May 2020 · XAI

Towards Visually Explaining Video Understanding Networks with Perturbation
Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato · 01 May 2020 · FAtt

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · 30 Apr 2020 · AAML, XAI

Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations
Dan Valle, Tiago Pimentel, Adriano Veloso · 22 Apr 2020 · FAtt, XAI, AAML

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution
Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder · 22 Apr 2020 · FAtt

Approximate Inverse Reinforcement Learning from Vision-based Imitation Learning
Keuntaek Lee, Bogdan I. Vlahov, Jason Gibson, James M. Rehg, Evangelos A. Theodorou · 17 Apr 2020

DeepStreamCE: A Streaming Approach to Concept Evolution Detection in Deep Neural Networks
Lorraine Chambers, M. Gaber, Z. Abdallah · 08 Apr 2020

Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection
Hanjie Chen, Guangtao Zheng, Yangfeng Ji · 04 Apr 2020 · FAtt

Continual Learning with Node-Importance based Adaptive Group Sparse Regularization
Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon · 30 Mar 2020 · CLL

Layerwise Knowledge Extraction from Deep Convolutional Networks
S. Odense, Artur Garcez · 19 Mar 2020 · FAtt

Overinterpretation reveals image classification model pathologies
Brandon Carter, Siddhartha Jain, Jonas W. Mueller, David K Gifford · 19 Mar 2020 · FAtt

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · 17 Mar 2020 · XAI

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras, Ahmed Osman, Wojciech Samek · 16 Mar 2020 · XAI, AAML

GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions
Zebin Yang, Aijun Zhang, Agus Sudjianto · 16 Mar 2020 · FAtt

Measuring and improving the quality of visual explanations
Agnieszka Grabska-Barwińska · 14 Mar 2020 · XAI, FAtt

IROF: a low resource evaluation metric for explanation methods
Laura Rieger, Lars Kai Hansen · 09 Mar 2020

A Survey on Deep Hashing Methods
Xiao Luo, Haixin Wang, Huasong Zhong, C. L. Philip Chen, Minghua Deng, Jianqiang Huang, Xiansheng Hua · 04 Mar 2020

Breaking Batch Normalization for better explainability of Deep Neural Networks through Layer-wise Relevance Propagation
M. Guillemot, C. Heusele, R. Korichi, S. Schnebert, Liming Luke Chen · 24 Feb 2020 · FAtt

Interpreting Interpretations: Organizing Attribution Methods by Criteria
Zifan Wang, Piotr (Peter) Mardziel, Anupam Datta, Matt Fredrikson · 19 Feb 2020 · XAI, FAtt

Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning
Hazem M. Fahmy, F. Pastore, M. Bagherzadeh, Lionel C. Briand · 03 Feb 2020 · AI4CE, AAML

Evaluating Weakly Supervised Object Localization Methods Right
Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim · 21 Jan 2020 · WSOL

Restricting the Flow: Information Bottlenecks for Attribution
Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf · 02 Jan 2020 · FAtt

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf · 20 Dec 2019 · BDL, FAtt, XAI

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning
Seul-Ki Yeom, P. Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, K. Müller, Wojciech Samek · 18 Dec 2019 · CVBM

On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak · 16 Dec 2019

Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning
Akanksha Atrey, Kaleigh Clary, David D. Jensen · 09 Dec 2019 · FAtt, LRM

Counterfactual Explanation Algorithms for Behavioral and Textual Data
Yanou Ramon, David Martens, F. Provost, Theodoros Evgeniou · 04 Dec 2019 · FAtt

Explainable artificial intelligence model to predict acute critical illness from electronic health records
S. Lauritsen, Mads Kristensen, Mathias Vassard Olsen, Morten Skaarup Larsen, K. M. Lauritsen, Marianne Johansson Jørgensen, Jeppe Lange, B. Thiesson · 03 Dec 2019

Sanity Checks for Saliency Metrics
Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece · 29 Nov 2019 · XAI

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Vanessa Buhrmester, David Münch, Michael Arens · 27 Nov 2019 · MLAU, FaML, XAI, AAML

Efficient Saliency Maps for Explainable AI
T. Nathan Mundhenk, Barry Y. Chen, Gerald Friedland · 26 Nov 2019 · XAI, FAtt

Improving Feature Attribution through Input-specific Network Pruning
Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, S. T. Kim, Nassir Navab · 25 Nov 2019 · FAtt

A psychophysics approach for quantitative comparison of interpretable computer vision models
F. Biessmann, D. Refiano · 24 Nov 2019