Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

8 October 2018 · arXiv:1810.03307
Julius Adebayo
Justin Gilmer
Ian Goodfellow
Been Kim
    FAtt
    AAML

Papers citing "Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values"

26 / 26 papers shown
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya
Sukrut Rao
Moritz Böhle
Bernt Schiele
68
2
0
28 Jan 2025
Flow AM: Generating Point Cloud Global Explanations by Latent Alignment
Hanxiao Tan
39
1
0
29 Apr 2024
Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space
Pedro Valois
Koichiro Niinuma
Kazuhiro Fukui
AAML
24
4
0
25 Nov 2023
A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Timo Speith
Markus Langer
29
12
0
26 Jul 2023
SwiFT: Swin 4D fMRI Transformer
P. Y. Kim
Junbeom Kwon
Sunghwan Joo
Sangyoon Bae
Donggyu Lee
Yoonho Jung
Shinjae Yoo
Jiook Cha
Taesup Moon
MedIm
30
20
0
12 Jul 2023
The Generalizability of Explanations
Hanxiao Tan
FAtt
18
1
0
23 Feb 2023
Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang
Felipe Torres
R. Sicre
Yannis Avrithis
Stéphane Ayache
33
22
0
17 Jan 2023
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti
Karthik Balaji Ganesh
Manoj Gayala
Nandita Lakshmi Tunuguntla
Sandesh Kamath
V. Balasubramanian
XAI
FAtt
AAML
32
4
0
09 Nov 2022
Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
Quan Zheng
Ziwei Wang
Jie Zhou
Jiwen Lu
FAtt
28
31
0
07 Aug 2022
Reliable Visualization for Deep Speaker Recognition
Pengqi Li
Lantian Li
A. Hamdulla
Dong Wang
HAI
40
9
0
08 Apr 2022
Metrics for saliency map evaluation of deep learning explanation methods
T. Gomez
Thomas Fréour
Harold Mouchère
XAI
FAtt
69
41
0
31 Jan 2022
Global explainability in aligned image modalities
Justin Engelmann
Amos Storkey
Miguel O. Bernabeu
FAtt
22
4
0
17 Dec 2021
Accelerating Multi-Objective Neural Architecture Search by Random-Weight Evaluation
Shengran Hu
Ran Cheng
Cheng He
Zhichao Lu
Jing Wang
Miao Zhang
32
7
0
08 Oct 2021
BR-NPA: A Non-Parametric High-Resolution Attention Model to improve the Interpretability of Attention
T. Gomez
Suiyi Ling
Thomas Fréour
Harold Mouchère
12
5
0
04 Jun 2021
Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
Nijat Mehdiyev
Peter Fettke
AI4TS
25
55
0
04 Sep 2020
iCaps: An Interpretable Classifier via Disentangled Capsule Networks
Dahuin Jung
Jonghyun Lee
Jihun Yi
Sungroh Yoon
20
12
0
20 Aug 2020
Drug discovery with explainable artificial intelligence
José Jiménez-Luna
F. Grisoni
G. Schneider
30
625
0
01 Jul 2020
Adversarial Infidelity Learning for Model Interpretation
Jian Liang
Bing Bai
Yuren Cao
Kun Bai
Fei-Yue Wang
AAML
44
18
0
09 Jun 2020
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek
G. Montavon
Sebastian Lapuschkin
Christopher J. Anders
K. Müller
XAI
44
82
0
17 Mar 2020
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras
Ahmed Osman
Wojciech Samek
XAI
AAML
21
150
0
16 Mar 2020
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
S. Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
37
6,110
0
22 Oct 2019
Sanity Checks for Saliency Maps
Julius Adebayo
Justin Gilmer
M. Muelly
Ian Goodfellow
Moritz Hardt
Been Kim
FAtt
AAML
XAI
35
1,927
0
08 Oct 2018
xGEMs: Generating Examplars to Explain Black-Box Models
Shalmali Joshi
Oluwasanmi Koyejo
Been Kim
Joydeep Ghosh
MLAU
25
40
0
22 Jun 2018
A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values
Mukund Sundararajan
Ankur Taly
FAtt
11
21
0
11 Jun 2018
Bioinformatics and Medicine in the Era of Deep Learning
D. Bacciu
P. Lisboa
José D. Martín
R. Stoean
A. Vellido
AI4CE
BDL
33
17
0
27 Feb 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
FAtt
77
1,791
0
30 Nov 2017