ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, D. Erhan, Pieter-Jan Kindermans, Been Kim
28 June 2018 · arXiv:1806.10758
Tags: FAtt, UQCV

Papers citing "A Benchmark for Interpretability Methods in Deep Neural Networks"

43 / 143 papers shown
FairCanary: Rapid Continuous Explainable Fairness
Avijit Ghosh, Aalok Shanbhag, Christo Wilson
13 Jun 2021

On Sample Based Explanation Methods for NLP: Efficiency, Faithfulness, and Semantic Evaluation
Wei Zhang, Ziming Huang, Yada Zhu, Guangnan Ye, Xiaodong Cui, Fan Zhang
09 Jun 2021
The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
Peter Hase, Harry Xie, Joey Tianyi Zhou
Tags: OODD, LRM, FAtt
01 Jun 2021

To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
E. Amparore, Alan Perotti, P. Bajardi
Tags: FAtt
01 Jun 2021

Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
Thorben Funke, Megha Khosla, Mandeep Rathee, Avishek Anand
Tags: FAtt
18 May 2021

Sanity Simulations for Saliency Methods
Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
Tags: FAtt
13 May 2021

Do Feature Attribution Methods Correctly Attribute Features?
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
Tags: FAtt, XAI
27 Apr 2021
Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian
Tags: TDI
19 Apr 2021

Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features
Ashkan Khakzar, Yang Zhang, W. Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab
Tags: FAtt
01 Apr 2021

Deep learning on fundus images detects glaucoma beyond the optic disc
Ruben Hemelings, B. Elen, J. Barbosa-Breda, Matthew B. Blaschko, P. Boever, Ingeborg Stalmans
Tags: MedIm
22 Mar 2021

Interpretable Machine Learning: Moving From Mythos to Diagnostics
Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
10 Mar 2021
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
Tags: FAtt
02 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
Tags: AAML, FAtt
25 Feb 2021

MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset
Chuizheng Meng, Loc Trinh, Nan Xu, Yan Liu
12 Feb 2021

Convolutional Neural Network Interpretability with General Pattern Theory
Erico Tjoa, Cuntai Guan
Tags: FAtt, AI4CE
05 Feb 2021

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord
Tags: XAI
13 Jan 2021
Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel
Tags: FAtt
18 Dec 2020

Transformer Interpretability Beyond Attention Visualization
Hila Chefer, Shir Gur, Lior Wolf
17 Dec 2020

Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert, Scott M. Lundberg, Su-In Lee
Tags: FAtt
21 Nov 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Tags: FAtt
23 Oct 2020

Feature Importance Ranking for Deep Learning
Maksymilian Wojtas, Ke Chen
18 Oct 2020
Learning Propagation Rules for Attribution Map Generation
Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang
Tags: FAtt
14 Oct 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova
Tags: XAI, LRM
12 Oct 2020

Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach
Nicholas F Halliwell, Freddy Lecue
Tags: FAtt
29 Sep 2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
Tags: XAI
22 Sep 2020

Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Erico Tjoa, Cuntai Guan
Tags: XAI, FAtt
07 Sep 2020
Debiasing Concept-based Explanations with Causal Analysis
M. T. Bahadori, David Heckerman
Tags: FAtt, CML
22 Jul 2020

Generative causal explanations of black-box classifiers
Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
Tags: CML
24 Jun 2020

Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei Wang
Tags: AAML
09 Jun 2020

Explaining AI-based Decision Support Systems using Concept Localization Maps
Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed
04 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
Tags: AAML, XAI
30 Apr 2020
Dendrite Net: A White-Box Module for Classification, Regression, and System Identification
Gang Liu, Junchang Wang
08 Apr 2020

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller
Tags: XAI
17 Mar 2020

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras, Ahmed Osman, Wojciech Samek
Tags: XAI, AAML
16 Mar 2020

Measuring and improving the quality of visual explanations
Agnieszka Grabska-Barwińska
Tags: XAI, FAtt
14 Mar 2020

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos
03 Mar 2020

Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
M. Elaraby, Guy Wolf, Margarida Carvalho
17 Feb 2020
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee
Tags: FAtt
10 Feb 2020

Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting
15 Jan 2020

Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning
Akanksha Atrey, Kaleigh Clary, David D. Jensen
Tags: FAtt, LRM
09 Dec 2019

Explainable AI for Trees: From Local Explanations to Global Understanding
Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee
Tags: FAtt
11 May 2019
On the (In)fidelity and Sensitivity for Explanations
Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
Tags: FAtt
27 Jan 2019

Revisiting the Importance of Individual Units in CNNs via Ablation
Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba
Tags: FAtt
07 Jun 2018