Revisiting Sanity Checks for Saliency Maps (arXiv:2110.14297)
G. Yona, D. Greenfeld
27 October 2021
Tags: AAML, FAtt
Papers citing "Revisiting Sanity Checks for Saliency Maps" (7 of 7 papers shown)
| Title | Authors | Tags | Date |
|---|---|---|---|
| A Tale of Two Imperatives: Privacy and Explainability | Supriya Manna, Niladri Sett | | 30 Dec 2024 |
| A Fresh Look at Sanity Checks for Saliency Maps | Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne | FAtt, LRM | 03 May 2024 |
| Generalizing Backpropagation for Gradient-Based Interpretability | Kevin Du, Lucas Torroba Hennigen, Niklas Stoehr, Alex Warstadt, Ryan Cotterell | MILM, FAtt | 06 Jul 2023 |
| Explaining, Analyzing, and Probing Representations of Self-Supervised Learning Models for Sensor-based Human Activity Recognition | Bulat Khaertdinov, S. Asteriadis | | 14 Apr 2023 |
| The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus | Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne | | 14 Feb 2023 |
| Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond | Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne | XAI, ELM | 14 Feb 2022 |
| Methods for Interpreting and Understanding Deep Neural Networks | G. Montavon, Wojciech Samek, K. Müller | FaML | 24 Jun 2017 |