Revisiting Sanity Checks for Saliency Maps
arXiv:2110.14297 · 27 October 2021
G. Yona, D. Greenfeld
Tags: AAML, FAtt

Papers citing "Revisiting Sanity Checks for Saliency Maps"

7 of 7 papers shown
A Tale of Two Imperatives: Privacy and Explainability
Supriya Manna, Niladri Sett
30 Dec 2024

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
Tags: FAtt, LRM
03 May 2024

Generalizing Backpropagation for Gradient-Based Interpretability
Kevin Du, Lucas Torroba Hennigen, Niklas Stoehr, Alex Warstadt, Ryan Cotterell
Tags: MILM, FAtt
06 Jul 2023

Explaining, Analyzing, and Probing Representations of Self-Supervised Learning Models for Sensor-based Human Activity Recognition
Bulat Khaertdinov, S. Asteriadis
14 Apr 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
Tags: XAI, ELM
14 Feb 2022

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
Tags: FaML
24 Jun 2017