What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors

22 September 2020 · Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik · XAI · arXiv:2009.10639

Papers citing "What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors"

33 papers
Evaluating Explanation Quality in X-IDS Using Feature Alignment Metrics
Mohammed Alquliti, Erisa Karafili, BooJoong Kang · XAI · 12 May 2025

Unifying Perplexing Behaviors in Modified BP Attributions through Alignment Perspective
Guanhua Zheng, Jitao Sang, Changsheng Xu · AAML, FAtt · 14 Mar 2025

Knowledge-Augmented Explainable and Interpretable Learning for Anomaly Detection and Diagnosis
Martin Atzmueller, Tim Bohne, Patricia Windler · 28 Nov 2024

Explainable Artificial Intelligence for Medical Applications: A Review
Qiyang Sun, Alican Akman, Björn Schuller · 15 Nov 2024

IBO: Inpainting-Based Occlusion to Enhance Explainable Artificial Intelligence Evaluation in Histopathology
Pardis Afshar, Sajjad Hashembeiki, Pouya Khani, Emad Fatemizadeh, M. Rohban · 29 Aug 2024

Revocable Backdoor for Deep Model Trading
Yiran Xu, Nan Zhong, Zhenxing Qian, Xinpeng Zhang · AAML · 01 Aug 2024

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa, Sumohana S. Channappayya · 17 Jun 2024

Applications of Explainable artificial intelligence in Earth system science
Feini Huang, Shijie Jiang, Lu Li, Yongkun Zhang, Ye Zhang, Ruqing Zhang, Qingliang Li, Danxi Li, Wei Shangguan, Yongjiu Dai · 12 Jun 2024

Backdoor-based Explainable AI Benchmark for High Fidelity Evaluation of Attribution Methods
Peiyu Yang, Naveed Akhtar, Jiantong Jiang, Ajmal Saeed Mian · XAI · 02 May 2024

Forward Learning for Gradient-based Black-box Saliency Map Generation
Zeliang Zhang, Mingqian Feng, Jinyang Jiang, Rongyi Zhu, Yijie Peng, Chenliang Xu · FAtt · 22 Mar 2024

Security and Privacy Challenges of Large Language Models: A Survey
B. Das, M. H. Amini, Yanzhao Wu · PILM, ELM · 30 Jan 2024

Manipulating Trajectory Prediction with Backdoors
Kaouther Messaoud, Kathrin Grosse, Mickaël Chen, Matthieu Cord, Patrick Pérez, Alexandre Alahi · AAML, LLMSV · 21 Dec 2023

Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond
Sidra Nasir, Rizwan Ahmed Khan, Samita Bai · 31 Aug 2023

What's meant by explainable model: A Scoping Review
Mallika Mainali, Rosina O. Weber · XAI · 18 Jul 2023

Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu, P. Bell, A. Rajan · 29 May 2023

Backdoor Attack with Sparse and Invisible Trigger
Yinghua Gao, Yiming Li, Xueluan Gong, Zhifeng Li, Shutao Xia, Qianqian Wang · AAML · 11 May 2023

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz · 07 Feb 2023

Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
Lukas Struppek, Dominik Hintersdorf, Kristian Kersting · SILM · 04 Nov 2022

SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer · 22 Aug 2022

Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning
Damien Dablain, Bartosz Krawczyk, Nitesh V. Chawla · FaML · 13 Jul 2022

Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
Jean-Stanislas Denain, Jacob Steinhardt · AAML · 27 Jun 2022

A Survey of Neural Trojan Attacks and Defenses in Deep Learning
Jie Wang, Ghulam Mubashar Hassan, Naveed Akhtar · AAML · 15 Feb 2022

Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
F. Giuste, Wenqi Shi, Yuanda Zhu, Tarun Naren, Monica Isgut, Ying Sha, L. Tong, Mitali S. Gupte, May D. Wang · 23 Dec 2021

Detection Accuracy for Evaluating Compositional Explanations of Units
Sayo M. Makinwa, Biagio La Rosa, Roberto Capobianco · FAtt, CoGe · 16 Sep 2021

Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study
Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou · 02 Sep 2021

TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar, Michail Maniatakos · 14 Aug 2021

Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou · AAML, FaML, XAI, HAI · 19 Mar 2021

Explainable Artificial Intelligence (XAI): An Engineering Perspective
F. Hussain, R. Hussain, E. Hossain · XAI · 10 Jan 2021

BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations
Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn · FAtt · 05 Dec 2020

Trustworthy AI
Richa Singh, Mayank Vatsa, N. Ratha · 02 Nov 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel · FAtt · 23 Oct 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia · AAML · 17 Jul 2020

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino · AAML · 02 Dec 2018