Sanity Checks for Saliency Maps (arXiv:1810.03292)

Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim · 8 October 2018 · Tags: FAtt, AAML, XAI

Papers citing "Sanity Checks for Saliency Maps"

Showing 50 of 357 citing papers.

3VL: Using Trees to Improve Vision-Language Models' Interpretability
Nir Yellinek, Leonid Karlinsky, Raja Giryes · 28 Dec 2023 · Tags: CoGe, VLM

Explainable Multi-Camera 3D Object Detection with Transformer-Based Saliency Maps
Till Beemelmanns, Wassim Zahr, Lutz Eckstein · 22 Dec 2023

CAManim: Animating end-to-end network activation maps
Emily Kaczmarek, Olivier X. Miguel, Alexa C. Bowie, R. Ducharme, Alysha L. J. Dingwall-Harvey, S. Hawken, Christine M. Armour, Mark C. Walker, Kevin Dick · 19 Dec 2023 · Tags: HAI

Is Ignorance Bliss? The Role of Post Hoc Explanation Faithfulness and Alignment in Model Trust in Laypeople and Domain Experts
Tessa Han, Yasha Ektefaie, Maha Farhat, Marinka Zitnik, Himabindu Lakkaraju · 09 Dec 2023 · Tags: FAtt

Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang · 29 Nov 2023

Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space
Pedro Valois, Koichiro Niinuma, Kazuhiro Fukui · 25 Nov 2023 · Tags: AAML

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain · 20 Nov 2023 · Tags: FaML

Auxiliary Losses for Learning Generalizable Concept-based Models
Ivaxi Sheth, Samira Ebrahimi Kahou · 18 Nov 2023

Influence of Video Dynamics on EEG-based Single-Trial Video Target Surveillance System
Heon Kwak, Sung-Jin Kim, Hyeon-Taek Han, Ji-Hoon Jeong, Seong-Whan Lee · 15 Nov 2023

SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training
Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo · 09 Nov 2023 · Tags: FAtt, AAML

Advancing Post Hoc Case Based Explanation with Feature Highlighting
Eoin M. Kenny, Eoin Delaney, Mark T. Keane · 06 Nov 2023

This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
Chiyu Ma, Brandon Zhao, Chaofan Chen, Cynthia Rudin · 28 Oct 2023

LICO: Explainable Models with Language-Image Consistency
Yiming Lei, Zilong Li, Yangyang Li, Junping Zhang, Hongming Shan · 15 Oct 2023 · Tags: VLM, FAtt

Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann John, Vineeth N. Balasubramanian, C. V. Jawahar · 26 Sep 2023 · Tags: CVBM

Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu · 14 Sep 2023 · Tags: ViT

Automatic Concept Embedding Model (ACEM): No train-time concepts, No issue!
Rishabh Jain · 07 Sep 2023 · Tags: LRM

Beyond XAI: Obstacles Towards Responsible AI
Yulu Pi · 07 Sep 2023

PDiscoNet: Semantically consistent part discovery for fine-grained recognition
Robert van der Klis, Stephan Alaniz, Massimiliano Mancini, C. Dantas, Dino Ienco, Zeynep Akata, Diego Marcos · 06 Sep 2023

DeViL: Decoding Vision features into Language
Meghal Dani, Isabel Rio-Torto, Stephan Alaniz, Zeynep Akata · 04 Sep 2023 · Tags: VLM

WSAM: Visual Explanations from Style Augmentation as Adversarial Attacker and Their Influence in Image Classification
Felipe Moreno-Vera, E. Medina, Jorge Poco · 29 Aug 2023

Interpretation on Multi-modal Visual Fusion
Hao Chen, Hao Zhou, Yongjian Deng · 19 Aug 2023

SAfER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference
Edouard Yvinec, Arnaud Dapogny, Kévin Bailly, Xavier Fischer · 09 Aug 2023 · Tags: AAML

Precise Benchmarking of Explainable AI Attribution Methods
Rafael Brandt, Daan Raatjens, G. Gaydadjiev · 06 Aug 2023 · Tags: XAI

Unlearning Spurious Correlations in Chest X-ray Classification
Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee · 02 Aug 2023 · Tags: CML, OOD

Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes
S. Lehmler, Muhammad Saif-ur-Rehman, Tobias Glasmachers, Ioannis Iossifidis · 01 Aug 2023

ProtoASNet: Dynamic Prototypes for Inherently Interpretable and Uncertainty-Aware Aortic Stenosis Classification in Echocardiography
H. Vaseli, A. Gu, Ahmadi Amiri, Michael Y. Tsang, A. Fung, Nima Kondori, Armin Saadat, Purang Abolmaesumi, T. Tsang · 26 Jul 2023

A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Timo Speith, Markus Langer · 26 Jul 2023

Uncovering Unique Concept Vectors through Latent Space Decomposition
Mara Graziani, Laura Mahony, An-phi Nguyen, Henning Müller, Vincent Andrearczyk · 13 Jul 2023

Robust Ranking Explanations
Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie · 08 Jul 2023 · Tags: FAtt, AAML

Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance
Shantanu Ghosh, Kayhan Batmanghelich · 07 Jul 2023

Generalizing Backpropagation for Gradient-Based Interpretability
Kevin Du, Lucas Torroba Hennigen, Niklas Stoehr, Alex Warstadt, Ryan Cotterell · 06 Jul 2023 · Tags: MILM, FAtt

Active Globally Explainable Learning for Medical Images via Class Association Embedding and Cyclic Adversarial Generation
Ruitao Xie, Jingbang Chen, Limai Jiang, Ru Xiao, Yi-Lun Pan, Yunpeng Cai · 12 Jun 2023 · Tags: GAN, MedIm

G-CAME: Gaussian-Class Activation Mapping Explainer for Object Detectors
Quoc Khanh Nguyen, Hung Truong Thanh Nguyen, Truong Thanh Hung Nguyen, Van Binh Truong, Quoc Hung Cao · 06 Jun 2023

Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
Suraj Srinivas, Sebastian Bordt, Hima Lakkaraju · 30 May 2023 · Tags: AAML

Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception?
Felix Wichmann, Robert Geirhos · 26 May 2023

Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Jonas Teufel, Luca Torresi, Pascal Friederich · 25 May 2023 · Tags: FAtt

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith · 20 Apr 2023

One Explanation Does Not Fit XIL
Felix Friedrich, David Steinmann, Kristian Kersting · 14 Apr 2023 · Tags: LRM

Explaining, Analyzing, and Probing Representations of Self-Supervised Learning Models for Sensor-based Human Activity Recognition
Bulat Khaertdinov, S. Asteriadis · 14 Apr 2023

MProtoNet: A Case-Based Interpretable Model for Brain Tumor Classification with 3D Multi-parametric Magnetic Resonance Imaging
Yuanyuan Wei, Roger Tam, Xiaoying Tang · 13 Apr 2023 · Tags: MedIm

Explanation of Face Recognition via Saliency Maps
Yuhang Lu, Touradj Ebrahimi · 12 Apr 2023 · Tags: XAI, CVBM

Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
Lorenz Linhardt, Klaus-Robert Müller, G. Montavon · 12 Apr 2023 · Tags: AAML

Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis
Cristiano Patrício, João C. Neves, Luís F. Teixeira · 10 Apr 2023 · Tags: MedIm, FAtt

Hierarchical Disentanglement-Alignment Network for Robust SAR Vehicle Recognition
Wei-Jang Li, Wei Yang, Wenpeng Zhang, Tianpeng Liu, Yongxiang Liu, Yong-Jin Liu · 07 Apr 2023

Are Data-driven Explanations Robust against Out-of-distribution Data?
Tang Li, Fengchun Qiao, Mengmeng Ma, Xiangkai Peng · 29 Mar 2023 · Tags: OODD, OOD

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky · 27 Mar 2023

Evaluating self-attention interpretability through human-grounded experimental protocol
Milan Bhan, Nina Achache, Victor Legrand, A. Blangero, N. Chesneau · 27 Mar 2023

Take 5: Interpretable Image Classification with a Handful of Features
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn · 23 Mar 2023 · Tags: FAtt

The Representational Status of Deep Learning Models
Eamon Duede · 21 Mar 2023

Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification
Alessandro Wollek, Robert Graf, Saša Čečatka, N. Fink, Theresa Willem, B. Sabel, Tobias Lasser · 03 Mar 2023