Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
arXiv 1910.10045 · 22 October 2019
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Topics: XAI

Papers citing "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI"

50 of 430 citing papers shown.
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl
28 Jul 2021 · 27 · 85 · 0

A Review of Some Techniques for Inclusion of Domain-Knowledge into Deep Neural Networks
T. Dash, Sharad Chitlangia, Aditya Ahuja, A. Srinivasan
21 Jul 2021 · 24 · 128 · 0

Leveraging Explainability for Comprehending Referring Expressions in the Real World
Fethiye Irmak Dogan, G. I. Melsión, Iolanda Leite
12 Jul 2021 · 37 · 8 · 0

Pairing Conceptual Modeling with Machine Learning
W. Maass, V. Storey
Topics: HAI
27 Jun 2021 · 19 · 33 · 0

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
24 Jun 2021 · 27 · 64 · 0

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu, Sujay Khandagale, Colin White, W. Neiswanger
23 Jun 2021 · 26 · 65 · 0
Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach
Johannes Rabold, M. Siebers, Ute Schmid
15 Jun 2021 · 18 · 14 · 0

Characterizing the risk of fairwashing
Ulrich Aivodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
14 Jun 2021 · 18 · 27 · 0

Exploring deterministic frequency deviations with explainable AI
Johannes Kruse, B. Schäfer, D. Witthaut
14 Jun 2021 · 11 · 15 · 0

What Can Knowledge Bring to Machine Learning? -- A Survey of Low-shot Learning for Structured Data
Yang Hu, Adriane P. Chapman, Guihua Wen, Dame Wendy Hall
11 Jun 2021 · 34 · 24 · 0

Evaluating the Correctness of Explainable AI Algorithms for Classification
Orcun Yalcin, Xiuyi Fan, Siyuan Liu
Topics: XAI, FAtt
20 May 2021 · 11 · 15 · 0

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021 · 21 · 137 · 0

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel
Topics: XAI
15 May 2021 · 21 · 184 · 0
Bias, Fairness, and Accountability with AI and ML Algorithms
Neng-Zhi Zhou, Zach Zhang, V. Nair, Harsh Singhal, Jie Chen, Agus Sudjianto
Topics: FaML
13 May 2021 · 16 · 8 · 0

e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, Thomas Lukasiewicz
Topics: VLM
08 May 2021 · 21 · 100 · 0

Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence
Emna Baccour, N. Mhaisen, A. Abdellatif, A. Erbad, Amr M. Mohamed, Mounir Hamdi, Mohsen Guizani
04 May 2021 · 26 · 86 · 0

Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures (Extended Technical Report)
Christian Alrabbaa, F. Baader, Stefan Borgwardt, Patrick Koopmann, Alisa Kovtunova
27 Apr 2021 · 17 · 23 · 0

Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Romain Hennequin, Vincent Guigue
Topics: FAtt
26 Apr 2021 · 29 · 20 · 0

EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, S. Tabik, David Filliat, P. Cruz, Rosana Montes, Francisco Herrera
24 Apr 2021 · 45 · 77 · 0
Rule Generation for Classification: Scalability, Interpretability, and Fairness
Tabea E. Rober, Adia C. Lumadjeng, M. Akyuz, Ş. İlker Birbil
21 Apr 2021 · 14 · 2 · 0

Evaluating Standard Feature Sets Towards Increased Generalisability and Explainability of ML-based Network Intrusion Detection
Mohanad Sarhan, S. Layeghy, Marius Portmann
15 Apr 2021 · 24 · 60 · 0

Anomaly-Based Intrusion Detection by Machine Learning: A Case Study on Probing Attacks to an Institutional Network
E. Tufan, C. Tezcan, Cengiz Acartürk
31 Mar 2021 · 11 · 29 · 0

Fairness and Transparency in Recommendation: The Users' Perspective
Nasim Sonboli, Jessie J. Smith, Florencia Cabral Berenfus, Robin Burke, Casey Fiesler
Topics: FaML
16 Mar 2021 · 13 · 65 · 0

A conditional, a fuzzy and a probabilistic interpretation of self-organising maps
Laura Giordano, Valentina Gliozzi, Daniele Theseider Dupré
Topics: AI4CE
11 Mar 2021 · 32 · 23 · 0

Explanations in Autonomous Driving: A Survey
Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze
09 Mar 2021 · 11 · 212 · 0

Ensembles of Random SHAPs
Lev V. Utkin, A. Konstantinov
Topics: FAtt
04 Mar 2021 · 16 · 20 · 0
A Comprehensive Study on Face Recognition Biases Beyond Demographics
Philipp Terhörst, J. Kolf, Marco Huber, Florian Kirchbuchner, Naser Damer, Aythami Morales, Julian Fierrez, Arjan Kuijper
02 Mar 2021 · 21 · 115 · 0

Explainable AI in Credit Risk Management
Branka Hadji Misheva, Jörg Osterrieder, Ali Hirsa, O. Kulkarni, Stephen Lin
01 Mar 2021 · 24 · 64 · 0

Towards Personalized Federated Learning
A. Tan, Han Yu, Li-zhen Cui, Qiang Yang
Topics: FedML, AI4CE
01 Mar 2021 · 203 · 840 · 0

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
Topics: XAI
15 Feb 2021 · 53 · 415 · 0

Advances in Electron Microscopy with Deep Learning
Jeffrey M. Ede
04 Jan 2021 · 27 · 2 · 0

Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model
Laura Giordano, Daniele Theseider Dupré
24 Dec 2020 · 28 · 35 · 0
Towards open and expandable cognitive AI architectures for large-scale multi-agent human-robot collaborative learning
Georgios Th. Papadopoulos, M. Antona, C. Stephanidis
Topics: AI4CE
15 Dec 2020 · 17 · 24 · 0

Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha
14 Dec 2020 · 19 · 142 · 0

Evolutionary learning of interpretable decision trees
Leonardo Lucio Custode, Giovanni Iacca
Topics: OffRL
14 Dec 2020 · 19 · 40 · 0

Explanation from Specification
Harish Naik, Gyorgy Turán
Topics: XAI
13 Dec 2020 · 16 · 0 · 0

Physics-Guided Spoof Trace Disentanglement for Generic Face Anti-Spoofing
Yaojie Liu, Xiaoming Liu
Topics: AAML
09 Dec 2020 · 24 · 10 · 0

Self-Explaining Structures Improve NLP Models
Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li
Topics: MILM, XAI, LRM, FAtt
03 Dec 2020 · 31 · 38 · 0
Deep Learning for Road Traffic Forecasting: Does it Make a Difference?
Eric L. Manibardo, I. Laña, Javier Del Ser
Topics: AI4TS
02 Dec 2020 · 26 · 67 · 0

Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment
Alfonso Ortega, Julian Fierrez, Aythami Morales, Zilong Wang, Tony Ribeiro
01 Dec 2020 · 12 · 13 · 0

Quantifying Explainers of Graph Neural Networks in Computational Pathology
Guillaume Jaume, Pushpak Pati, Behzad Bozorgtabar, Antonio Foncubierta-Rodríguez, Florinda Feroce, A. Anniciello, T. Rau, Jean-Philippe Thiran, M. Gabrani, O. Goksel
Topics: FAtt
25 Nov 2020 · 13 · 76 · 0

Interpretable collaborative data analysis on distributed data
A. Imakura, Hiroaki Inaba, Yukihiko Okada, Tetsuya Sakurai
Topics: FedML
09 Nov 2020 · 6 · 26 · 0

This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
Meike Nauta, Annemarie Jutte, Jesper C. Provoost, C. Seifert
Topics: FAtt
05 Nov 2020 · 14 · 65 · 0

Abduction and Argumentation for Explainable Machine Learning: A Position Survey
A. Kakas, Loizos Michael
24 Oct 2020 · 9 · 11 · 0
Explaining Deep Neural Networks
Oana-Maria Camburu
Topics: XAI, FAtt
04 Oct 2020 · 20 · 26 · 0

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
Topics: XAI
22 Sep 2020 · 24 · 93 · 0

Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
17 Sep 2020 · 24 · 79 · 0

Model extraction from counterfactual explanations
Ulrich Aivodji, Alexandre Bolot, Sébastien Gambs
Topics: MIACV, MLAU
03 Sep 2020 · 25 · 51 · 0

Face Image Quality Assessment: A Literature Survey
Torsten Schlett, Christian Rathgeb, O. Henniger, Javier Galbally, Julian Fierrez, Christoph Busch
Topics: CVBM
02 Sep 2020 · 11 · 128 · 0

Fuzzy Jaccard Index: A robust comparison of ordered lists
Matej Petković, Blaž Škrlj, D. Kocev, Nikola Simidjievski
05 Aug 2020 · 23 · 13 · 0