Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
arXiv 1810.03292 · 8 October 2018
Tags: FAtt, AAML, XAI
Papers citing "Sanity Checks for Saliency Maps" (showing 50 of 357)
A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
  S. Karatsiolis, A. Kamilaris · FAtt · 19 Sep 2022
TCAM: Temporal Class Activation Maps for Object Localization in Weakly-Labeled Unconstrained Videos
  Soufiane Belharbi, Ismail Ben Ayed, Luke McCaffrey, Eric Granger · WSOL · 30 Aug 2022
Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
  Francesco Foscarin, Katharina Hoedt, Verena Praher, A. Flexer, Gerhard Widmer · 26 Aug 2022
SoK: Explainable Machine Learning for Computer Security Applications
  A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer · 22 Aug 2022
HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning
  Xumeng Wang, Wei Chen, Jiazhi Xia, Zhen Wen, Rongchen Zhu, Tobias Schreck · FedML · 16 Aug 2022
Gradient Mask: Lateral Inhibition Mechanism Improves Performance in Artificial Neural Networks
  Lei Jiang, Yongqing Liu, Shihai Xiao, Yansong Chua · 14 Aug 2022
The Weighting Game: Evaluating Quality of Explainability Methods
  Lassi Raatikainen, Esa Rahtu · FAtt, XAI · 12 Aug 2022
Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
  Quan Zheng, Ziwei Wang, Jie Zhou, Jiwen Lu · FAtt · 07 Aug 2022
ferret: a Framework for Benchmarking Explainers on Transformers
  Giuseppe Attanasio, Eliana Pastor, C. Bonaventura, Debora Nozza · 02 Aug 2022
Leveraging Explanations in Interactive Machine Learning: An Overview
  Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth M. Daly · XAI, FAtt, LRM · 29 Jul 2022
Adaptive occlusion sensitivity analysis for visually explaining video recognition networks
  Tomoki Uchiyama, Naoya Sogi, S. Iizuka, Koichiro Niinuma, Kazuhiro Fukui · 26 Jul 2022
ScoreCAM GNN: An Optimal Explanation of Deep Networks on Graphs
  Adrien Raison, Pascal Bourdon, David Helbert · FAtt, GNN · 26 Jul 2022
LightX3ECG: A Lightweight and eXplainable Deep Learning System for 3-lead Electrocardiogram Classification
  Khiem H. Le, Hieu H. Pham, Thao BT. Nguyen, Tu Nguyen, T. Thanh, Cuong D. Do · 25 Jul 2022
XG-BoT: An Explainable Deep Graph Neural Network for Botnet Detection and Forensics
  Wai Weng Lo, Gayan K. Kulatilleke, Mohanad Sarhan, S. Layeghy, Marius Portmann · 19 Jul 2022
BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence
  Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-yeung, Sahil Verma, Jocelyn Huang, Brian Barr, C. Bayan Bruss · 12 Jul 2022
Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
  Yannik Mahlau, Christian Nolde · FAtt · 04 Jul 2022
Distilling Model Failures as Directions in Latent Space
  Saachi Jain, Hannah Lawrence, Ankur Moitra, A. Madry · 29 Jun 2022
"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI
  Leilani H. Gilpin, Andrew R. Paley, M. A. Alam, Sarah Spurlock, Kristian J. Hammond · XAI · 27 Jun 2022
Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
  Jean-Stanislas Denain, Jacob Steinhardt · AAML · 27 Jun 2022
Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
  Teodor Chiaburu, F. Biessmann, Frank Haußer · 15 Jun 2022
A Functional Information Perspective on Model Interpretation
  Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan · AAML, FAtt · 12 Jun 2022
Towards better Interpretable and Generalizable AD detection using Collective Artificial Intelligence
  H. Nguyen, Michael Clement, Boris Mansencal, Pierrick Coupé · MedIm · 07 Jun 2022
A Human-Centric Take on Model Monitoring
  Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi · 06 Jun 2022
Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
  Tom Ron, M. Weiler-Sagie, Tamir Hazan · FAtt, MedIm · 06 Jun 2022
Use-Case-Grounded Simulations for Explanation Evaluation
  Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar · FAtt, ELM · 05 Jun 2022
Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts
  Chirag Raman, Hayley Hung, Marco Loog · 01 Jun 2022
Attribution-based Explanations that Provide Recourse Cannot be Robust
  H. Fokkema, R. D. Heide, T. Erven · FAtt · 31 May 2022
Comparing interpretation methods in mental state decoding analyses with deep learning models
  A. Thomas, Christopher Ré, R. Poldrack · AI4CE · 31 May 2022
How explainable are adversarially-robust CNNs?
  Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen · AAML, FAtt · 25 May 2022
Faithful Explanations for Deep Graph Models
  Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee Joe-Wong, Matt Fredrikson, Anupam Datta · FAtt · 24 May 2022
What You See is What You Classify: Black Box Attributions
  Steven Stalder, Nathanael Perraudin, R. Achanta, Fernando Perez-Cruz, Michele Volpi · FAtt · 23 May 2022
B-cos Networks: Alignment is All We Need for Interpretability
  Moritz D Boehle, Mario Fritz, Bernt Schiele · 20 May 2022
Cardinality-Minimal Explanations for Monotonic Neural Networks
  Ouns El Harzli, Bernardo Cuenca Grau, Ian Horrocks · FAtt · 19 May 2022
Trustworthy Graph Neural Networks: Aspects, Methods and Trends
  He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei · 16 May 2022
How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?
  Alvin Chan, Yew-Soon Ong, Clement Tan · AAML · 09 May 2022
ExSum: From Local Explanations to Model Understanding
  Yilun Zhou, Marco Tulio Ribeiro, J. Shah · FAtt, LRM · 30 Apr 2022
Learning to Scaffold: Optimizing Model Explanations for Teaching
  Patrick Fernandes, Marcos Vinícius Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig · FAtt · 22 Apr 2022
Backdooring Explainable Machine Learning
  Maximilian Noppel, Lukas Peter, Christian Wressnegger · AAML · 20 Apr 2022
Interpretability of Machine Learning Methods Applied to Neuroimaging
  Elina Thibeau-Sutre, S. Collin, Ninon Burgos, O. Colliot · 14 Apr 2022
Visualizing Deep Neural Networks with Topographic Activation Maps
  A. Krug, Raihan Kabir Ratul, Christopher Olson, Sebastian Stober · FAtt, AI4CE · 07 Apr 2022
Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis
  E. Schoop, Xin Zhou, Gang Li, Zhourong Chen, Björn Hartmann, Yang Li · HAI, FAtt · 05 Apr 2022
Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning
  Yuansheng Xie, Soroush Vosoughi, Saeed Hassanpour · 30 Mar 2022
Visualizing Global Explanations of Point Cloud DNNs
  Hanxiao Tan · 3DPC · 17 Mar 2022
Controlling the Focus of Pretrained Language Generation Models
  Jiabao Ji, Yoon Kim, James R. Glass, Tianxing He · 02 Mar 2022
Evaluating Feature Attribution Methods in the Image Domain
  Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys · FAtt · 22 Feb 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
  Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre · AAML · 15 Feb 2022
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
  Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne · XAI, ELM · 14 Feb 2022
DermX: an end-to-end framework for explainable automated dermatological diagnosis
  Raluca Jalaboi, F. Faye, Mauricio Orbes-Arteaga, D. Jørgensen, Ole Winther, A. Galimzianova · MedIm · 14 Feb 2022
Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
  Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed · AI4TS · 08 Feb 2022
Towards a consistent interpretation of AIOps models
  Yingzhe Lyu, Gopi Krishnan Rajbahadur, Dayi Lin, Boyuan Chen, Zhen Ming (Jack) Jiang · AI4CE · 04 Feb 2022