Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Erico Tjoa, Cuntai Guan
arXiv:2009.02899, 7 September 2020
Tags: XAI, FAtt
Papers citing "Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset" (12 of 12 shown):
GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations
Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe
17 Jun 2024

Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
Niklas Koenen, Marvin N. Wright
Tags: FAtt
17 Apr 2024

Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
Shizhan Gong, Qi Dou, Farzan Farnia
Tags: FAtt
06 Apr 2024

Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck, Marcos M. Raimundo, Jorge Poco
03 Apr 2024

Forward Learning for Gradient-based Black-box Saliency Map Generation
Zeliang Zhang, Mingqian Feng, Jinyang Jiang, Rongyi Zhu, Yijie Peng, Chenliang Xu
Tags: FAtt
22 Mar 2024

A Pseudo-Boolean Polynomials Approach for Image Edge Detection
T. M. Chikake, B. Goldengorin
29 Aug 2023

SHAMSUL: Systematic Holistic Analysis to investigate Medical Significance Utilizing Local interpretability methods in deep learning for chest radiography pathology prediction
Mahbub Ul Alam, Jaakko Hollmén, Jón R. Baldvinsson, R. Rahmani
Tags: FAtt
16 Jul 2023

Benchmark data to study the influence of pre-training on explanation performance in MR image classification
Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
21 Jun 2023

Validation of a Hospital Digital Twin with Machine Learning
M. Ahmad, V. Chickarmane, Farinaz Sabz Ali Pour, Nima Shariari, Taposh Dutta Roy
Tags: AI4CE
07 Mar 2023

Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI
Erico Tjoa, Hong Jing Khok, Tushar Chouhan, G. Cuntai
Tags: FAtt
30 Dec 2021

Evaluating saliency methods on artificial data with different background types
Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
Tags: XAI, FAtt, MedIm
09 Dec 2021

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
01 Nov 2021