Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

22 October 2019
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
A. Barbado
S. García
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
    XAI
ArXiv (abs) · PDF · HTML

Papers citing "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI"

50 / 1,389 papers shown
SCARI: Separate and Conquer Algorithm for Action Rules and Recommendations Induction
Marek Sikora
Pawel Matyszok
Lukasz Wróbel
20
12
0
09 Jun 2021
Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé
M. Schaar
FAtt AI4TS
104
81
0
09 Jun 2021
Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems
Andrea Apicella
Salvatore Giugliano
Francesco Isgrò
R. Prevete
80
18
0
09 Jun 2021
Taxonomy of Machine Learning Safety: A Survey and Primer
Sina Mohseni
Haotao Wang
Zhiding Yu
Chaowei Xiao
Zhangyang Wang
J. Yadawa
91
32
0
09 Jun 2021
Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching
Tomas Folke
Scott Cheng-Hsin Yang
S. Anderson
Patrick Shafto
51
19
0
08 Jun 2021
Can a single neuron learn predictive uncertainty?
Edgardo Solano-Carrillo
UQCV
74
1
0
07 Jun 2021
Data-Driven Design-by-Analogy: State of the Art and Future Directions
Shuo Jiang
Jie Hu
Kristin L. Wood
Jianxi Luo
78
54
0
03 Jun 2021
Is Sparse Attention more Interpretable?
Clara Meister
Stefan Lazov
Isabelle Augenstein
Ryan Cotterell
MILM
64
45
0
02 Jun 2021
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
E. Amparore
Alan Perotti
P. Bajardi
FAtt
81
68
0
01 Jun 2021
Explainability via Interactivity? Supporting Nonexperts' Sensemaking of Pretrained CNN by Interacting with Their Daily Surroundings
Chao Wang
Pengcheng An
HAI
60
7
0
31 May 2021
Know Your Model (KYM): Increasing Trust in AI and Machine Learning
Mary Roszel
Robert Norvill
Jean Hilger
R. State
46
4
0
31 May 2021
Do not explain without context: addressing the blind spot of model explanations
Katarzyna Woźnica
Katarzyna Pękala
Hubert Baniecki
Wojciech Kretowicz
Elżbieta Sienkiewicz
P. Biecek
59
1
0
28 May 2021
Fooling Partial Dependence via Data Poisoning
Hubert Baniecki
Wojciech Kretowicz
P. Biecek
AAML
83
23
0
26 May 2021
Development and evaluation of an Explainable Prediction Model for Chronic Kidney Disease Patients based on Ensemble Trees
Pedro A. Moreno-Sánchez
13
40
0
21 May 2021
Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh
Sebastian Müller
Matthias Jakobs
Vanessa Toborek
Hanxiao Tan
Raphael Fischer
Pascal Welke
Sebastian Houben
Laura von Rueden
XAI
82
28
0
21 May 2021
Evaluating the Correctness of Explainable AI Algorithms for Classification
Orcun Yalcin
Xiuyi Fan
Siyuan Liu
XAI FAtt
46
15
0
20 May 2021
A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi
Rahee Walambe
K. Kotecha
138
142
0
17 May 2021
How to Explain Neural Networks: an Approximation Perspective
Hangcheng Dong
Bingguo Liu
Fengdong Chen
Dong Ye
Guodong Liu
FAtt
46
1
0
17 May 2021
Designer-User Communication for XAI: An epistemological approach to discuss XAI design
J. Ferreira
Mateus de Souza Monteiro
27
4
0
17 May 2021
Abstraction, Validation, and Generalization for Explainable Artificial Intelligence
Scott Cheng-Hsin Yang
Tomas Folke
Patrick Shafto
74
5
0
16 May 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe
Bettina Finzel
XAI
153
198
0
15 May 2021
Verification of Size Invariance in DNN Activations using Concept Embeddings
Gesina Schwalbe
3DPC
42
8
0
14 May 2021
XAI Handbook: Towards a Unified Framework for Explainable AI
Sebastián M. Palacio
Adriano Lucieri
Mohsin Munir
Jörn Hees
Sheraz Ahmed
Andreas Dengel
56
32
0
14 May 2021
Physical Artificial Intelligence: The Concept Expansion of Next-Generation Artificial Intelligence
Yingbo Li
Yucong Duan
Anamaria-Beatrice Spulber
Haoyang Che
Z. Maamar
Zhao Li
Chen-Ying Yang
Yuxiao Lei
18
6
0
13 May 2021
Bias, Fairness, and Accountability with AI and ML Algorithms
Neng-Zhi Zhou
Zach Zhang
V. Nair
Harsh Singhal
Jie Chen
Agus Sudjianto
FaML
123
9
0
13 May 2021
ExpMRC: Explainability Evaluation for Machine Reading Comprehension
Yiming Cui
Ting Liu
Wanxiang Che
Zhigang Chen
Shijin Wang
ELM LRM
36
11
0
10 May 2021
Towards Explainable, Privacy-Preserved Human-Motion Affect Recognition
Matthew Malek-Podjaski
Fani Deligianni
CVBM
57
7
0
09 May 2021
e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
Maxime Kayser
Oana-Maria Camburu
Leonard Salewski
Cornelius Emde
Virginie Do
Zeynep Akata
Thomas Lukasiewicz
VLM
112
101
0
08 May 2021
Scaling up Memory-Efficient Formal Verification Tools for Tree Ensembles
John Törnblom
Simin Nadjm-Tehrani
52
4
0
06 May 2021
Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence
Emna Baccour
N. Mhaisen
A. Abdellatif
A. Erbad
Amr M. Mohamed
Mounir Hamdi
Mohsen Guizani
98
94
0
04 May 2021
Towards Accountability in the Use of Artificial Intelligence for Public Administrations
M. Loi
M. Spielkamp
120
50
0
04 May 2021
A Computational Framework for Modeling Complex Sensor Network Data Using Graph Signal Processing and Graph Neural Networks in Structural Health Monitoring
Stefan Bloemheuvel
Jurgen van den Hoogen
Martin Atzmueller
AI4CE
66
22
0
01 May 2021
Finding Good Proofs for Description Logic Entailments Using Recursive Quality Measures (Extended Technical Report)
Christian Alrabbaa
F. Baader
Stefan Borgwardt
Patrick Koopmann
Alisa Kovtunova
34
23
0
27 Apr 2021
End-to-end grasping policies for human-in-the-loop robots via deep reinforcement learning
M. Sharif
Deniz Erdogmus
Chris Amato
T. Padır
36
2
0
26 Apr 2021
TrustyAI Explainability Toolkit
Rob Geada
Tommaso Teofili
Rui Vieira
Rebecca Whitworth
Daniele Zonca
59
2
0
26 Apr 2021
Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar
Romain Hennequin
Vincent Guigue
FAtt
100
20
0
26 Apr 2021
Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study
Qinghao Ye
Jun Xia
Guang Yang
95
60
0
25 Apr 2021
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez
Alberto Lamas
Jules Sanchez
Gianni Franchi
Ivan Donadello
Siham Tabik
David Filliat
P. Cruz
Rosana Montes
Francisco Herrera
136
78
0
24 Apr 2021
Intensional Artificial Intelligence: From Symbol Emergence to Explainable and Empathetic AI
Michael Timothy Bennett
Y. Maruyama
62
3
0
23 Apr 2021
Rule Generation for Classification: Scalability, Interpretability, and Fairness
Tabea E. Rober
Adia C. Lumadjeng
M. Akyuz
Ş. İlker Birbil
126
2
0
21 Apr 2021
Open Challenges on Generating Referring Expressions for Human-Robot Interaction
Fethiye Irmak Dogan
Iolanda Leite
91
4
0
19 Apr 2021
DA-DGCEx: Ensuring Validity of Deep Guided Counterfactual Explanations With Distribution-Aware Autoencoder Loss
Jokin Labaien
E. Zugasti
Xabier De Carlos
CML
59
4
0
19 Apr 2021
SurvNAM: The machine learning survival model explanation
Lev V. Utkin
Egor D. Satyukov
A. Konstantinov
AAML FAtt
93
30
0
18 Apr 2021
NICE: An Algorithm for Nearest Instance Counterfactual Explanations
Dieter Brughmans
Pieter Leyman
David Martens
83
65
0
15 Apr 2021
Evaluating Standard Feature Sets Towards Increased Generalisability and Explainability of ML-based Network Intrusion Detection
Mohanad Sarhan
S. Layeghy
Marius Portmann
67
69
0
15 Apr 2021
Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation
Tomasz Szandała
FAtt
27
4
0
11 Apr 2021
Deep Learning and Traffic Classification: Lessons learned from a commercial-grade dataset with hundreds of encrypted and zero-day applications
Lixuan Yang
A. Finamore
Feng Jun
Dario Rossi
27
50
0
07 Apr 2021
VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations
Archit Rathore
Sunipa Dev
J. M. Phillips
Vivek Srikumar
Yan Zheng
Chin-Chia Michael Yeh
Junpeng Wang
Wei Zhang
Bei Wang
80
11
0
06 Apr 2021
Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
Thomas Rojat
Raphael Puget
David Filliat
Javier Del Ser
R. Gelin
Natalia Díaz Rodríguez
XAI AI4TS
99
135
0
02 Apr 2021
Anomaly-Based Intrusion Detection by Machine Learning: A Case Study on Probing Attacks to an Institutional Network
E. Tufan
C. Tezcan
Cengiz Acartürk
51
30
0
31 Mar 2021