Sanity Checks for Saliency Metrics (arXiv:1912.01451)

29 November 2019
Richard J. Tomsett, Daniel Harborne, Supriyo Chakraborty, Prudhvi K. Gurram, Alun D. Preece
XAI

Papers citing "Sanity Checks for Saliency Metrics"

28 / 28 papers shown
Title
Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci
10 Jan 2025

SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation
Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, L. Bentivogli
03 Nov 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
FAtt
16 Jul 2024

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
02 May 2024

How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
Benjamin Frész, Elena Dubovitskaya, Danilo Brajovic, Marco F. Huber, Christian Horz
19 Apr 2024

A comprehensive study on fidelity metrics for XAI
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
19 Jan 2024

I-CEE: Tailoring Explanations of Image Classification Models to User Expertise
Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci
19 Dec 2023

SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training
Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo
FAtt, AAML
09 Nov 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

On The Coherence of Quantitative Evaluation of Visual Explanations
Benjamin Vandersmissen, José Oramas
XAI, FAtt
14 Feb 2023

A novel approach to generate datasets with XAI ground truth to evaluate image models
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
11 Feb 2023

DExT: Detector Explanation Toolkit
Deepan Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro
21 Dec 2022

BOREx: Bayesian-Optimization--Based Refinement of Saliency Map for Image- and Video-Classification Models
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
FAtt
31 Oct 2022

How explainable are adversarially-robust CNNs?
Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen
AAML, FAtt
25 May 2022

Interpretability of Machine Learning Methods Applied to Neuroimaging
Elina Thibeau-Sutre, S. Collin, Ninon Burgos, O. Colliot
14 Apr 2022

Sensing accident-prone features in urban scenes for proactive driving and accident prevention
Sumit Mishra, Praveenbalaji Rajendran, L. Vecchietti, Dongsoo Har
25 Feb 2022

Evaluating Feature Attribution Methods in the Image Domain
Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys
FAtt
22 Feb 2022

Turath-150K: Image Database of Arab Heritage
Dani Kiyasseh, Rasheed el-Bouri
01 Jan 2022

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi
FAtt
29 Nov 2021

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
MLAU, FAtt, AAML
07 Nov 2021

Explainable Deep Reinforcement Learning for Portfolio Management: An Empirical Approach
Mao Guan, Xiao-Yang Liu
AIFin, AI4TS
07 Nov 2021

Deep Neural Networks and Tabular Data: A Survey
V. Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci
LMTD
05 Oct 2021

To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
E. Amparore, Alan Perotti, P. Bajardi
FAtt
01 Jun 2021

Sanity Simulations for Saliency Methods
Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
FAtt
13 May 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
AAML, FAtt
25 Feb 2021

Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel
FAtt
18 Dec 2020

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee
FAtt
10 Feb 2020

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML
24 Jun 2017