ResearchTrend.AI
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

22 October 2019
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
A. Barbado
S. García
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
    XAI

Papers citing "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI"

39 / 1,389 papers shown
An Information-theoretic Visual Analysis Framework for Convolutional Neural Networks
Jingyi Shen
Han-Wei Shen
FAtt, HAI
15
1
0
02 May 2020
The Grammar of Interactive Explanatory Model Analysis
Hubert Baniecki
Dariusz Parzych
P. Biecek
75
46
0
01 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML, XAI
118
382
0
30 Apr 2020
Learning a Formula of Interpretability to Learn Interpretable Formulas
M. Virgolin
A. D. Lorenzo
Eric Medvet
Francesca Randone
70
35
0
23 Apr 2020
Deep Echo State Networks for Short-Term Traffic Forecasting: Performance Comparison and Statistical Assessment
Javier Del Ser
I. Laña
Eric L. Manibardo
I. Oregi
E. Osaba
J. Lobo
Miren Nekane Bilbao
E. Vlahogianni
55
16
0
17 Apr 2020
Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19
F. Shi
Jun Wang
Jun Shi
Zi-xiang Wu
Qian Wang
Zhenyu Tang
Kelei He
Yinghuan Shi
Dinggang Shen
108
1,060
0
06 Apr 2020
R3: A Reading Comprehension Benchmark Requiring Reasoning Processes
Ran Wang
Kun Tao
Dingjie Song
Zhilong Zhang
Xiao Ma
Xiáo Su
Xinyu Dai
64
3
0
02 Apr 2020
Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples
Alejandro Barredo Arrieta
Javier Del Ser
AAML
118
24
0
25 Mar 2020
Learn to Forget: Machine Unlearning via Neuron Masking
Yang Liu
Zhuo Ma
Ximeng Liu
Jian Liu
Zhongyuan Jiang
Jianfeng Ma
Philip Yu
K. Ren
MU
88
67
0
24 Mar 2020
SurvLIME: A method for explaining machine learning survival models
M. Kovalev
Lev V. Utkin
E. Kasimov
292
91
0
18 Mar 2020
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu
Tianxiang Sun
Yige Xu
Yunfan Shao
Ning Dai
Xuanjing Huang
LM&MA, VLM
393
1,500
0
18 Mar 2020
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek
G. Montavon
Sebastian Lapuschkin
Christopher J. Anders
K. Müller
XAI
143
83
0
17 Mar 2020
Towards Transparent Robotic Planning via Contrastive Explanations
Shenghui Chen
Kayla Boggess
Lu Feng
53
9
0
16 Mar 2020
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras
Ahmed Osman
Wojciech Samek
XAI, AAML
97
157
0
16 Mar 2020
Universal Function Approximation on Graphs
Rickard Brüel-Gabrielsson
61
6
0
14 Mar 2020
Explainable Agents Through Social Cues: A Review
Sebastian Wallkötter
Silvia Tulli
Ginevra Castellano
Ana Paiva
Mohamed Chetouani
60
13
0
11 Mar 2020
Vector symbolic architectures for context-free grammars
P. B. Graben
Markus Huber
Werner Meyer
Ronald Römer
M. Wolff
66
9
0
11 Mar 2020
Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology
Stefan Studer
T. Bui
C. Drescher
A. Hanuschkin
Ludwig Winkler
S. Peters
Klaus-Robert Muller
133
180
0
11 Mar 2020
Towards Interpretable ANNs: An Exact Transformation to Multi-Class Multivariate Decision Trees
Duy T. Nguyen
Kathryn E. Kasmarik
H. Abbass
30
8
0
10 Mar 2020
Information cartography in association rule mining
Iztok Fister
Iztok Fister
36
13
0
29 Feb 2020
AI safety: state of the field through quantitative lens
Mislav Juric
A. Sandic
Mario Brčič
93
24
0
12 Feb 2020
From Data to Actions in Intelligent Transportation Systems: a Prescription of Functional Requirements for Model Actionability
I. Laña
J. S. Medina
E. Vlahogianni
Javier Del Ser
103
52
0
06 Feb 2020
LUNAR: Cellular Automata for Drifting Data Streams
J. Lobo
Javier Del Ser
Francisco Herrera
AI4TS
38
4
0
06 Feb 2020
MNIST-NET10: A heterogeneous deep networks fusion based on the degree of certainty to reach 0.1% error rate. Ensembles overview and proposal
Siham Tabik
R. F. Alvear-Sandoval
María M. Ruiz
J. Sancho-Gómez
A. Figueiras-Vidal
Francisco Herrera
118
34
0
30 Jan 2020
An interpretable semi-supervised classifier using two different strategies for amended self-labeling
Isel Grau
Dipankar Sengupta
M. Lorenzo
A. Nowé
SSL
93
4
0
26 Jan 2020
Explainable Artificial Intelligence and Machine Learning: A reality rooted perspective
F. Emmert-Streib
O. Yli-Harja
M. Dehmer
48
85
0
26 Jan 2020
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan
Jinjun Xiong
Mengzhou Li
Ge Wang
AAML, AI4CE
94
317
0
08 Jan 2020
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao
D. Gruen
Sarah Miller
142
733
0
08 Jan 2020
Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation
Tianhong Dai
Kai Arulkumaran
Tamara Gerbert
Samyakh Tukra
Feryal M. P. Behbahani
Anil Anthony Bharath
87
28
0
18 Dec 2019
Understanding complex predictive models with Ghost Variables
Pedro Delicado
D. Peña
FAtt
44
5
0
13 Dec 2019
Rule Extraction in Unsupervised Anomaly Detection for Model Explainability: Application to OneClass SVM
A. Barbado
Óscar Corcho
Richard Benjamins
59
54
0
21 Nov 2019
Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine
Weisi Guo
69
40
0
11 Nov 2019
Learning Fair Rule Lists
Ulrich Aïvodji
Julien Ferry
Sébastien Gambs
Marie-José Huguet
Mohamed Siala
FaML
64
11
0
09 Sep 2019
Satellite-Net: Automatic Extraction of Land Cover Indicators from Satellite Imagery by Deep Learning
Eleonora Bernasconi
Francesco Pugliese
Diego Zardetto
M. Scannapieco
29
3
0
22 Jul 2019
A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa
Cuntai Guan
XAI
170
1,464
0
17 Jul 2019
A Multi-Objective Anytime Rule Mining System to Ease Iterative Feedback from Domain Experts
T. Baum
Steffen Herbold
K. Schneider
23
4
0
23 Dec 2018
A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Sina Mohseni
Niloofar Zarei
Eric D. Ragan
122
102
0
28 Nov 2018
On a Sparse Shortcut Topology of Artificial Neural Networks
Fenglei Fan
Dayang Wang
Hengtao Guo
Qikui Zhu
Pingkun Yan
Ge Wang
Hengyong Yu
139
22
0
22 Nov 2018
XAI Beyond Classification: Interpretable Neural Clustering
Xi Peng
Yunfan Li
Ivor W. Tsang
Erik Cambria
Jiancheng Lv
Qiufeng Wang
75
75
0
22 Aug 2018