Sanity Checks for Saliency Maps

8 October 2018
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI

Papers citing "Sanity Checks for Saliency Maps"

Showing 50 of 357 citing papers:

SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective
Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Z. Kong, Kwan-Liu Ma
AAML, FAtt · 01 Mar 2023

Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation
N. Jethani, A. Saporta, Rajesh Ranganath
FAtt · 24 Feb 2023

The Generalizability of Explanations
Hanxiao Tan
FAtt · 23 Feb 2023

sMRI-PatchNet: A novel explainable patch-based deep learning network for Alzheimer's disease diagnosis and discriminative atrophy localisation with Structural MRI
Xin Zhang, Liangxiu Han, Lianghao Han, Haoming Chen, Darren Dancey, Daoqiang Zhang
MedIm · 17 Feb 2023

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
FAtt · 17 Feb 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

On The Coherence of Quantitative Evaluation of Visual Explanations
Benjamin Vandersmissen, José Oramas
XAI, FAtt · 14 Feb 2023

Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez, Alfonso Ortega, Ainhoa Herrarte, Manuel Alcántara, J. Ortega-Garcia
FaML · 13 Feb 2023

A novel approach to generate datasets with XAI ground truth to evaluate image models
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
11 Feb 2023

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz
07 Feb 2023

Variational Information Pursuit for Interpretable Predictions
Aditya Chattopadhyay, Kwan Ho Ryan Chan, B. Haeffele, D. Geman, René Vidal
DRL · 06 Feb 2023

Stop overkilling simple tasks with black-box models and use transparent models instead
Matteo Rizzo, Matteo Marcuzzo, A. Zangari, A. Gasparetto, A. Albarelli
VLM · 06 Feb 2023

ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
VLM · 28 Jan 2023

Holistically Explainable Vision Transformers
Moritz D Boehle, Mario Fritz, Bernt Schiele
ViT · 20 Jan 2023

TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks
Mariano V. Ntrougkas, Nikolaos Gkalelis, Vasileios Mezaris
FAtt · 18 Jan 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
17 Jan 2023

Towards Reconciling Usability and Usefulness of Explainable AI Methodologies
Pradyumna Tambwekar, Matthew C. Gombolay
13 Jan 2023

Saliency-Augmented Memory Completion for Continual Learning
Guangji Bai, Chen Ling, Yuyang Gao, Liang Zhao
CLL · 26 Dec 2022

Impossibility Theorems for Feature Attribution
Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim
FAtt · 22 Dec 2022

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022

Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
FAtt · 16 Dec 2022

Interpretable ML for Imbalanced Data
Damien Dablain, C. Bellinger, Bartosz Krawczyk, D. Aha, Nitesh V. Chawla
15 Dec 2022

On the Relationship Between Explanation and Prediction: A Causal View
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
FAtt, CML · 13 Dec 2022

Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods
Ming-Xiu Jiang, Saeed Khorram, Li Fuxin
FAtt · 13 Dec 2022

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

This changes to that: Combining causal and non-causal explanations to generate disease progression in capsule endoscopy
Anuja Vats, A. Mohammed, Marius Pedersen, Nirmalie Wiratunga
MedIm · 05 Dec 2022

Understanding and Enhancing Robustness of Concept-based Models
Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang
AAML · 29 Nov 2022

Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
FAtt · 29 Nov 2022

Interactive Visual Feature Search
Devon Ulrich, Ruth C. Fong
FAtt · 28 Nov 2022

Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath
27 Nov 2022

MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
23 Nov 2022

ModelDiff: A Framework for Comparing Learning Algorithms
Harshay Shah, Sung Min Park, Andrew Ilyas, A. Madry
SyDa · 22 Nov 2022

Do graph neural networks learn traditional jet substructure?
Farouk Mokhtar, Raghav Kansal, Javier Mauricio Duarte
GNN · 17 Nov 2022

Data-Centric Debugging: mitigating model failures via targeted data collection
Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi
17 Nov 2022

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Interpretable Few-shot Learning with Online Attribute Selection
M. Zarei, Majid Komeili
FAtt · 16 Nov 2022

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
XAI, FAtt · 10 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML · 09 Nov 2022

ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang
ViT · 06 Nov 2022

BOREx: Bayesian-Optimization-Based Refinement of Saliency Map for Image- and Video-Classification Models
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
FAtt · 31 Oct 2022

Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision
Ashvin Nair, Brian Zhu, Gokul Narayanan, Eugen Solowjow, Sergey Levine
OffRL, OnRL · 27 Oct 2022

Logic-Based Explainability in Machine Learning
Sasha Rubin
LRM, XAI · 24 Oct 2022

Hierarchical Neyman-Pearson Classification for Prioritizing Severe Disease Categories in COVID-19 Patient Data
Lijia Wang, Y. X. R. Wang, Jingyi Jessica Li, Xin Tong
01 Oct 2022

Variance Covariance Regularization Enforces Pairwise Independence in Self-Supervised Representations
Grégoire Mialon, Randall Balestriero, Yann LeCun
29 Sep 2022

Formal Conceptual Views in Neural Networks
Johannes Hirth, Tom Hanika
27 Sep 2022

Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot, Gianni Franchi, Javier Del Ser, Raja Chatila, Natalia Díaz Rodríguez
AAML · 26 Sep 2022

Ablation Path Saliency
Justus Sagemüller, Olivier Verdier
FAtt, AAML · 26 Sep 2022

I-SPLIT: Deep Network Interpretability for Split Computing
Federico Cunico, Luigi Capogrosso, Francesco Setti, D. Carra, Franco Fummi, Marco Cristani
23 Sep 2022

Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism
Ioanna Gkartzonika, Nikolaos Gkalelis, Vasileios Mezaris
22 Sep 2022

XClusters: Explainability-first Clustering
Hyunseung Hwang, Steven Euijong Whang
22 Sep 2022