ResearchTrend.AI

Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
arXiv: 2009.02899

7 September 2020
Erico Tjoa, Cuntai Guan
XAI, FAtt

Papers citing "Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset"

12 papers shown
GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations
Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe
17 Jun 2024
Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
Niklas Koenen, Marvin N. Wright
FAtt
17 Apr 2024
Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
Shizhan Gong, Qi Dou, Farzan Farnia
FAtt
06 Apr 2024
Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck, Marcos M. Raimundo, Jorge Poco
03 Apr 2024
Forward Learning for Gradient-based Black-box Saliency Map Generation
Zeliang Zhang, Mingqian Feng, Jinyang Jiang, Rongyi Zhu, Yijie Peng, Chenliang Xu
FAtt
22 Mar 2024
A Pseudo-Boolean Polynomials Approach for Image Edge Detection
T. M. Chikake, B. Goldengorin
29 Aug 2023
SHAMSUL: Systematic Holistic Analysis to investigate Medical Significance Utilizing Local interpretability methods in deep learning for chest radiography pathology prediction
Mahbub Ul Alam, Jaakko Hollmén, Jón R. Baldvinsson, R. Rahmani
FAtt
16 Jul 2023
Benchmark data to study the influence of pre-training on explanation performance in MR image classification
Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
21 Jun 2023
Validation of a Hospital Digital Twin with Machine Learning
M. Ahmad, V. Chickarmane, Farinaz Sabz Ali Pour, Nima Shariari, Taposh Dutta Roy
AI4CE
07 Mar 2023
Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI
Erico Tjoa, Hong Jing Khok, Tushar Chouhan, G. Cuntai
FAtt
30 Dec 2021
Evaluating saliency methods on artificial data with different background types
Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
XAI, FAtt, MedIm
09 Dec 2021
Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
01 Nov 2021