arXiv 2006.11371 · Cited By
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad · XAI · 16 June 2020
Papers citing "Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey" (showing 50 of 225)
medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space
Amil Dravid, Florian Schiffers, Boqing Gong, Aggelos K. Katsaggelos · GAN, MedIm · 11 Apr 2022

Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME
Niloofar Ranjbar, Reza Safabakhsh · FAtt · 07 Apr 2022

Conditional Autoregressors are Interpretable Classifiers
N. Elazar · BDL · 31 Mar 2022

A Meta Survey of Quality Evaluation Criteria in Explanation Methods
Helena Lofstrom, K. Hammar, Ulf Johansson · XAI · 25 Mar 2022

Explainability in reinforcement learning: perspective and position
Agneza Krajna, Mario Brčič, T. Lipić, Juraj Dončević · 22 Mar 2022

Human-Centric Artificial Intelligence Architecture for Industry 5.0 Applications
Jože M. Rožanec, I. Novalija, Patrik Zajec, K. Kenda, Hooman Tavakoli, ..., G. Sofianidis, Spyros Theodoropoulos, Blaž Fortuna, Dunja Mladenić, John Soldatos · 3DV, AI4CE · 21 Mar 2022

A Survey on Privacy for B5G/6G: New Privacy Challenges, and Research Directions
Chamara Sandeepa, Bartlomiej Siniarski, N. Kourtellis, Shen Wang, Madhusanka Liyanage · 08 Mar 2022

Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box
Catarina Moreira, Yu-Liang Chou, Chih-Jou Hsieh, Chun Ouyang, Joaquim A. Jorge, João Pereira · CML · 04 Mar 2022

Label-Free Explainability for Unsupervised Models
Jonathan Crabbé, M. Schaar · FAtt, MILM · 03 Mar 2022

Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection
Y. Kim, Huili Chen, F. Koushanfar · FedML, AAML · 21 Feb 2022

TimeREISE: Time-series Randomized Evolving Input Sample Explanation
Dominique Mercier, Andreas Dengel, Sheraz Ahmed · AI4TS · 16 Feb 2022

Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed · AI4TS · 08 Feb 2022

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis, E. Barnes, I. Ebert‐Uphoff · 07 Feb 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert · ELM, XAI · 20 Jan 2022

Explainable Artificial Intelligence for Pharmacovigilance: What Features Are Important When Predicting Adverse Outcomes?
I. Ward, Ling Wang, Juan Lu, M. Bennamoun, Girish Dwivedi, Frank M. Sanfilippo · 25 Dec 2021

Scope and Sense of Explainability for AI-Systems
Anastasia-Maria Leventi-Peetz, T. Östreich, Werner Lennartz, Kai Weber · 20 Dec 2021

Applications of Explainable AI for 6G: Technical Aspects, Use Cases, and Research Challenges
Shen Wang, M. Qureshi, Luis Miralles-Pechuán, Thien Huynh-The, Thippa Reddy Gadekallu, Madhusanka Liyanage · 09 Dec 2021

SyntEO: Synthetic Data Set Generation for Earth Observation and Deep Learning -- Demonstrated for Offshore Wind Farm Detection
Thorsten Hoeser, C. Kuenzer · 06 Dec 2021

Understanding the Dynamics of DNNs Using Graph Modularity
Yao Lu, Wen Yang, Yunzhe Zhang, Zuohui Chen, Jinyin Chen, Qi Xuan, Zhen Wang, Xiaoniu Yang · 24 Nov 2021

STEEX: Steering Counterfactual Explanations with Semantics
P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord · 17 Nov 2021

A Survey on AI Assurance
Feras A. Batarseh, Laura J. Freeman · 15 Nov 2021

Revisiting Methods for Finding Influential Examples
Karthikeyan K, Anders Søgaard · TDI · 08 Nov 2021

On the Effectiveness of Interpretable Feedforward Neural Network
Miles Q. Li, Benjamin C. M. Fung, Adel Abusitta · FaML, AI4CE · 03 Nov 2021

Explaining Latent Representations with a Corpus of Examples
Jonathan Crabbé, Zhaozhi Qian, F. Imrie, M. Schaar · FAtt · 28 Oct 2021

Local Explanations for Clinical Search Engine results
Edeline Contempré, Zoltán Szlávik, Majid Mohammadi, Erick Velazquez Godinez, A. T. Teije, Ilaria Tiddi · FAtt · 19 Oct 2021

Knowledge-driven Active Learning
Gabriele Ciravegna, F. Precioso, Alessandro Betti, Kevin Mottin, Marco Gori · 15 Oct 2021

A Framework for Learning to Request Rich and Contextually Useful Information from Humans
Khanh Nguyen, Yonatan Bisk, Hal Daumé · 14 Oct 2021

Cartoon Explanations of Image Classifiers
Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok · FAtt · 07 Oct 2021

Image recognition via Vietoris-Rips complex
Yasuhiko Asao, Jumpei Nagase, Ryotaro Sakamoto, S. Takagi · CoGe · 06 Sep 2021

Explaining Bayesian Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Muller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft · BDL, AAML · 23 Aug 2021

Logic Explained Networks
Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Lió, Marco Maggini, S. Melacci · 11 Aug 2021

Explainable AI: current status and future directions
Prashant Gohel, Priyanka Singh, M. Mohanty · XAI · 12 Jul 2021

A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data
Raphael Mazzine, David Martens · 09 Jul 2021

General Board Game Concepts
Éric Piette, Matthew Stephenson, Dennis J. N. J. Soemers, C. Browne · 02 Jul 2021

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu, Sujay Khandagale, Colin White, W. Neiswanger · 23 Jun 2021

An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data
Lev V. Utkin, A. Konstantinov, Kirill Vishniakov · FAtt · 16 Jun 2021

Entropy-based Logic Explanations of Neural Networks
Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, S. Melacci · FAtt, XAI · 12 Jun 2021

Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé, M. Schaar · FAtt, AI4TS · 09 Jun 2021

The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen · FAtt · 31 May 2021

PyTorch, Explain! A Python library for Logic Explained Networks
Pietro Barbiero, Gabriele Ciravegna, Dobrik Georgiev, Francesco Giannini · FAtt, XAI · 25 May 2021

Explainable Activity Recognition for Smart Home Systems
Devleena Das, Yasutaka Nishimura, R. Vivek, Naoto Takeda, Sean T. Fish, Thomas Ploetz, Sonia Chernova · 20 May 2021

Evaluating the Correctness of Explainable AI Algorithms for Classification
Orcun Yalcin, Xiuyi Fan, Siyuan Liu · XAI, FAtt · 20 May 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel · XAI · 15 May 2021

Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention
Nihar Bendre, K. Desai, Peyman Najafirad · CoGe · 15 May 2021

Zero-bias Deep Learning Enabled Quick and Reliable Abnormality Detection in IoT
Yongxin Liu, Jian Wang, Jianqiang Li, Shuteng Niu, Haoze Song · 08 Apr 2021

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert‐Uphoff, E. Barnes · OOD · 18 Mar 2021

Explanations in Autonomous Driving: A Survey
Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze · 09 Mar 2021

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge · CML · 07 Mar 2021

Ensembles of Random SHAPs
Lev V. Utkin, A. Konstantinov · FAtt · 04 Mar 2021

AI-Augmented Behavior Analysis for Children with Developmental Disabilities: Building Towards Precision Treatment
Shadi Ghafghazi, Amarie Carnett, Leslie C. Neely, Arun Das, P. Rad · 21 Feb 2021