Sanity Checks for Saliency Maps
arXiv: 1810.03292 · 8 October 2018
Authors: Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Topics: FAtt, AAML, XAI
Papers citing "Sanity Checks for Saliency Maps" (showing 50 of 350)
| Title | Authors | Topics | Metrics | Date |
|---|---|---|---|---|
| Implet: A Post-hoc Subsequence Explainer for Time Series Models | Fanyu Meng, Ziwen Kan, Shahbaz Rezaei, Z. Kong, Xin Chen, Xin Liu | AI4TS | 24 / 0 / 0 | 13 May 2025 |
| From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection | Moritz Vandenhirtz, Julia E. Vogt | | 38 / 0 / 0 | 09 May 2025 |
| PhysNav-DG: A Novel Adaptive Framework for Robust VLM-Sensor Fusion in Navigation Applications | Trisanth Srinivasan, Santosh Patapati | | 34 / 0 / 0 | 03 May 2025 |
| Interpretable graph-based models on multimodal biomedical data integration: A technical review and benchmarking | Alireza Sadeghi, F. Hajati, A. Argha, Nigel H Lovell, Min Yang, Hamid Alinejad-Rokny | | 41 / 0 / 0 | 03 May 2025 |
| In defence of post-hoc explanations in medical AI | Joshua Hatherley, Lauritz Munch, Jens Christian Bjerring | | 32 / 0 / 0 | 29 Apr 2025 |
| Gradient Attention Map Based Verification of Deep Convolutional Neural Networks with Application to X-ray Image Datasets | Omid Halimi Milani, Amanda Nikho, Lauren Mills, Marouane Tliba, Ahmet Enis Cetin, Mohammed H. Elnagar | MedIm | 38 / 0 / 0 | 29 Apr 2025 |
| Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts | M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik | KELM | 167 / 0 / 0 | 24 Apr 2025 |
| Probabilistic Stability Guarantees for Feature Attributions | Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong | | 27 / 0 / 0 | 18 Apr 2025 |
| ProtoECGNet: Case-Based Interpretable Deep Learning for Multi-Label ECG Classification with Contrastive Learning | Shri Kiran Srinivasan, David Chen, Thomas Statchen, Michael C. Burkhart, Nipun Bhandari, Bashar Ramadan, Brett Beaulieu-Jones | | 37 / 1 / 0 | 11 Apr 2025 |
| Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? | Joshua Hatherley | | 31 / 1 / 0 | 31 Mar 2025 |
| Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models | Itay Benou, Tammy Riklin-Raviv | | 67 / 0 / 0 | 27 Feb 2025 |
| Disentangling Visual Transformers: Patch-level Interpretability for Image Classification | Guillaume Jeanneret, Loïc Simon, F. Jurie | ViT | 49 / 0 / 0 | 24 Feb 2025 |
| Reliable Explainability of Deep Learning Spatial-Spectral Classifiers for Improved Semantic Segmentation in Autonomous Driving | Jon Gutiérrez-Zaballa, Koldo Basterretxea, Javier Echanobe | | 79 / 0 / 0 | 21 Feb 2025 |
| Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks | Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska | BDL, AAML | 67 / 0 / 0 | 17 Feb 2025 |
| Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution | Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju | FAtt | 147 / 0 / 0 | 17 Feb 2025 |
| Extending Information Bottleneck Attribution to Video Sequences | Veronika Solopova, Lucas Schmidt, Dorothea Kolossa | | 47 / 0 / 0 | 28 Jan 2025 |
| B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable | Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele | | 68 / 2 / 0 | 28 Jan 2025 |
| The Curious Case of Arbitrariness in Machine Learning | Prakhar Ganesh, Afaf Taik, G. Farnadi | | 59 / 2 / 0 | 28 Jan 2025 |
| Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics | Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger | XAI, ELM | 42 / 2 / 0 | 03 Jan 2025 |
| A Tale of Two Imperatives: Privacy and Explainability | Supriya Manna, Niladri Sett | | 100 / 0 / 0 | 30 Dec 2024 |
| GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers | Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle | | 81 / 0 / 0 | 23 Nov 2024 |
| GraphXAIN: Narratives to Explain Graph Neural Networks | Mateusz Cedro, David Martens | | 49 / 0 / 0 | 04 Nov 2024 |
| LLMScan: Causal Scan for LLM Misbehavior Detection | Mengdi Zhang, Kai Kiat Goh, Peixin Zhang, Jun Sun, Rose Lin Xin, Hongyu Zhang | | 23 / 0 / 0 | 22 Oct 2024 |
| Unlearning-based Neural Interpretations | Ching Lam Choi, Alexandre Duplessis, Serge Belongie | FAtt | 47 / 0 / 0 | 10 Oct 2024 |
| Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond | Shanshan Han | | 84 / 1 / 0 | 09 Oct 2024 |
| F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI | Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo | AAML | 38 / 0 / 0 | 03 Oct 2024 |
| Counterfactual Token Generation in Large Language Models | Ivi Chatzi, N. C. Benz, Eleni Straitouri, Stratis Tsirtsis, Manuel Gomez Rodriguez | LRM | 34 / 3 / 0 | 25 Sep 2024 |
| Explainable AI needs formal notions of explanation correctness | Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki | XAI | 31 / 1 / 0 | 22 Sep 2024 |
| InfoDisent: Explainability of Image Classification Models by Information Disentanglement | Łukasz Struski, Dawid Rymarczyk, Jacek Tabor | | 59 / 0 / 0 | 16 Sep 2024 |
| Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation | Hugo Porta, Emanuele Dalsasso, Diego Marcos, D. Tuia | | 95 / 0 / 0 | 14 Sep 2024 |
| The Evolution of Reinforcement Learning in Quantitative Finance: A Survey | Nikolaos Pippas, Cagatay Turkay, Elliot A. Ludvig | AIFin | 89 / 3 / 0 | 20 Aug 2024 |
| A Weakly Supervised and Globally Explainable Learning Framework for Brain Tumor Segmentation | Andrea Failla, Salvatore Citraro, Xiaoxi He, Yi-Lun Pan, Yunpeng Cai | MedIm | 34 / 4 / 0 | 02 Aug 2024 |
| Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector | Xianren Zhang, Dongwon Lee, Suhang Wang | VLM, FAtt | 45 / 3 / 0 | 27 Jul 2024 |
| Exploring the Plausibility of Hate and Counter Speech Detectors with Explainable AI | Adrian Jaques Böck, D. Slijepcevic, Matthias Zeppelzauer | | 44 / 0 / 0 | 25 Jul 2024 |
| Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems | Yuhang Lu, Zewei Xu, Touradj Ebrahimi | CVBM, FAtt, XAI | 44 / 3 / 0 | 08 Jul 2024 |
| Amazing Things Come From Having Many Good Models | Cynthia Rudin, Chudi Zhong, Lesia Semenova, Margo Seltzer, Ronald E. Parr, Jiachang Liu, Srikar Katta, Jon Donnelly, Harry Chen, Zachery Boner | | 28 / 23 / 0 | 05 Jul 2024 |
| Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers | Lokesh Badisa, Sumohana S. Channappayya | | 45 / 0 / 0 | 17 Jun 2024 |
| Listenable Maps for Zero-Shot Audio Classifiers | Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan | | 37 / 4 / 0 | 27 May 2024 |
| Exposing Image Classifier Shortcuts with Counterfactual Frequency (CoF) Tables | James Hinns, David Martens | | 46 / 2 / 0 | 24 May 2024 |
| Learned feature representations are biased by complexity, learning order, position, and more | Andrew Kyle Lampinen, Stephanie C. Y. Chan, Katherine Hermann | AI4CE, FaML, SSL, OOD | 34 / 6 / 0 | 09 May 2024 |
| A Fresh Look at Sanity Checks for Saliency Maps | Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne | FAtt, LRM | 42 / 5 / 0 | 03 May 2024 |
| Stability of Explainable Recommendation | Sairamvinay Vijayaraghavan, Prasant Mohapatra | AAML | 38 / 1 / 0 | 03 May 2024 |
| Robust Explainable Recommendation | Sairamvinay Vijayaraghavan, Prasant Mohapatra | AAML | 23 / 0 / 0 | 03 May 2024 |
| Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey | Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova | | 61 / 5 / 0 | 02 May 2024 |
| Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training | Shizhan Gong, Qi Dou, Farzan Farnia | FAtt | 42 / 2 / 0 | 06 Apr 2024 |
| Influence based explainability of brain tumors segmentation in multimodal Magnetic Resonance Imaging | Tommaso Torda, Andrea Ciardiello, Simona Gargiulo, Greta Grillo, Simone Scardapane, Cecilia Voena, S. Giagu | | 29 / 0 / 0 | 05 Apr 2024 |
| CAM-Based Methods Can See through Walls | Magamed Taimeskhanov, R. Sicre, Damien Garreau | | 21 / 1 / 0 | 02 Apr 2024 |
| What Sketch Explainability Really Means for Downstream Tasks | Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song | | 30 / 4 / 0 | 14 Mar 2024 |
| Explainable Learning with Gaussian Processes | Kurt Butler, Guanchao Feng, P. Djuric | | 33 / 1 / 0 | 11 Mar 2024 |
| A comprehensive study on fidelity metrics for XAI | Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover | | 36 / 11 / 0 | 19 Jan 2024 |