Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

22 October 2019
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, A. Barbado, S. García, S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI
ArXiv (abs) · PDF · HTML

Papers citing "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI"

50 / 1,389 papers shown
Towards Comparative Physical Interpretation of Spatial Variability Aware Neural Networks: A Summary of Results
Jayant Gupta, Carl Molnar, Gaoxiang Luo, Joe Knight, Shashi Shekhar
29 Oct 2021

Explaining Latent Representations with a Corpus of Examples
Jonathan Crabbé, Zhaozhi Qian, F. Imrie, M. Schaar
FAtt
28 Oct 2021

Counterfactual Shapley Additive Explanations
Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
27 Oct 2021

A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges
Mohammadreza Salehi, Hossein Mirzaei, Dan Hendrycks, Yixuan Li, M. Rohban, Mohammad Sabokrou
OOD
26 Oct 2021

Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set
Gabriel Laberge, Y. Pequignot, Alexandre Mathieu, Foutse Khomh, M. Marchand
FAtt
26 Oct 2021

Mechanistic Interpretation of Machine Learning Inference: A Fuzzy Feature Importance Fusion Approach
D. Rengasamy, J. M. Mase, Mercedes Torres Torres, Benjamin Rothwell, David A. Winkler, Grazziela Figueredo
FAtt
22 Oct 2021

Learning to run a power network with trust
Antoine Marot, Benjamin Donnot, Karim Chaouache, Ying-Ling Lu, Qiuhua Huang, Ramij-Raja Hossain, J. Cremer
21 Oct 2021

A Survey on Methods and Metrics for the Assessment of Explainability under the Proposed AI Act
Francesco Sovrano, Salvatore Sapienza, M. Palmirani, F. Vitali
21 Oct 2021

Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
Q. V. Liao, R. Varshney
20 Oct 2021

Local Explanations for Clinical Search Engine results
Edeline Contempré, Zoltán Szlávik, Majid Mohammadi, Erick Velazquez Godinez, A. T. Teije, Ilaria Tiddi
FAtt
19 Oct 2021

Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning
Bang Xiang Yong, Alexandra Brintrup
19 Oct 2021

TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger
MedIm
16 Oct 2021

Tree-based local explanations of machine learning model predictions, AraucanaXAI
Enea Parimbelli, G. Nicora, Szymon Wilk, W. Michalowski, Riccardo Bellazzi
15 Oct 2021

Training Neural Networks for Solving 1-D Optimal Piecewise Linear Approximation
Hangcheng Dong, Jing-Xiao Liao, Yan Wang, Yixin Chen, Bingguo Liu, Dong Ye, Guodong Liu
14 Oct 2021

Truthful AI: Developing and governing AI that does not lie
Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, William Saunders
HILM
13 Oct 2021

Logic Constraints to Feature Importances
Nicola Picchiotti, Marco Gori
13 Oct 2021

Clustering-Based Interpretation of Deep ReLU Network
Nicola Picchiotti, Marco Gori
FAtt
13 Oct 2021

A Field Guide to Scientific XAI: Transparent and Interpretable Deep Learning for Bioinformatics Research
Thomas P. Quinn, Sunil R. Gupta, Svetha Venkatesh, Vuong Le
OOD
13 Oct 2021

Opportunities for Machine Learning to Accelerate Halide Perovskite Commercialization and Scale-Up
Rishi E. Kumar, A. Tiihonen, Shijing Sun, D. Fenning, Zhe Liu, Tonio Buonassisi
08 Oct 2021

Explanation as a process: user-centric construction of multi-level and multi-modal explanations
Bettina Finzel, David E. Tafler, Stephan Scheele, Ute Schmid
07 Oct 2021

Robotic Lever Manipulation using Hindsight Experience Replay and Shapley Additive Explanations
Sindre Benjamin Remman, A. Lekkas
07 Oct 2021

Shapley variable importance clouds for interpretable machine learning
Yilin Ning, M. Ong, Bibhas Chakraborty, B. Goldstein, Daniel Ting, Roger Vaughan, Nan Liu
FAtt
06 Oct 2021

BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images
Boyu Zhang, Aleksandar Vakanski, Min Xian
05 Oct 2021

NEWRON: A New Generalization of the Artificial Neuron to Enhance the Interpretability of Neural Networks
F. Siciliano, Maria Sofia Bucarelli, Gabriele Tolomei, Fabrizio Silvestri
GNN, AI4CE
05 Oct 2021

What is understandable in Bayesian network explanations?
Raphaela Butz, Renée Schulz, A. Hommersom, M. V. Eekelen
FAtt, XAI, BDL
04 Oct 2021

Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values
Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez
04 Oct 2021

Trustworthy AI: From Principles to Practices
Yue Liu, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
04 Oct 2021

Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR
Francesco Sovrano, F. Vitali, M. Palmirani
02 Oct 2021

Explanation-Aware Experience Replay in Rule-Dense Environments
Francesco Sovrano, Alex Raymond, Amanda Prorok
29 Sep 2021

Critical Empirical Study on Black-box Explanations in AI
Jean-Marie John-Mathews
29 Sep 2021

A Sociotechnical View of Algorithmic Fairness
Mateusz Dolata, Stefan Feuerriegel, Gerhard Schwabe
FaML
27 Sep 2021

Understanding Spending Behavior: Recurrent Neural Network Explanation and Interpretation
Charl Maree, C. Omlin
AI4TS
24 Sep 2021

Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability
Jean-Marie John-Mathews
20 Sep 2021

Detection Accuracy for Evaluating Compositional Explanations of Units
Sayo M. Makinwa, Biagio La Rosa, Roberto Capobianco
FAtt, CoGe
16 Sep 2021

DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods
Nirmalie Wiratunga, A. Wijekoon, Ikechukwu Nkisi-Orji, Kyle Martin, Chamath Palihawadana, D. Corsar
CML, AAML
13 Sep 2021

An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability
Francesco Sovrano, F. Vitali
11 Sep 2021

Secondary control activation analysed and predicted with explainable AI
Johannes Kruse, B. Schäfer, D. Witthaut
10 Sep 2021

IFBiD: Inference-Free Bias Detection
Ignacio Serna, Daniel DeAlcala, Aythami Morales, Julian Fierrez, J. Ortega-Garcia
CVBM
09 Sep 2021

TrAISformer -- A Transformer Network with Sparse Augmented Data Representation and Cross Entropy Loss for AIS-based Vessel Trajectory Prediction
Duong Nguyen, Ronan Fablet
08 Sep 2021

Communicating Inferred Goals with Passive Augmented Reality and Active Haptic Feedback
J. F. Mullen, Josh Mosier, Sounak Chakrabarti, Anqi Chen, Tyler White, Dylan P. Losey
03 Sep 2021

A brief history of AI: how to prevent another winter (a critical review)
Amirhosein Toosi, A. Bottino, Babak Saboury, E. Siegel, Arman Rahmim
03 Sep 2021

Parkinson's Disease Diagnosis based on Gait Cycle Analysis Through an Interpretable Interval Type-2 Neuro-Fuzzy System
Armin Salimi-Badr, Mohammadreza Hashemi, H. Saffari
02 Sep 2021

The Role of Explainability in Assuring Safety of Machine Learning in Healthcare
Yan Jia, John McDermid, T. Lawton, Ibrahim Habli
01 Sep 2021

Look Who's Talking: Interpretable Machine Learning for Assessing Italian SMEs Credit Default
Lisa Crosato, C. Liberati, M. Repetto
31 Aug 2021

Explainable AI for engineering design: A unified approach of systems engineering and component-based deep learning
Philipp Geyer, M. Singh, Xia Chen
30 Aug 2021

Graph-guided random forest for gene set selection
Bastian Pfeifer, Hubert Baniecki, Anna Saranti, P. Biecek, Andreas Holzinger
26 Aug 2021

Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models
Yiming Cui, Weinan Zhang, Wanxiang Che, Ting Liu, Zhigang Chen, Shijin Wang
LRM
26 Aug 2021

Interpreting Face Inference Models using Hierarchical Network Dissection
Divyang Teotia, Àgata Lapedriza, Sarah Ostadabbas
CVBM
23 Aug 2021

Fast Accurate Defect Detection in Wafer Fabrication
T. Olschewski
23 Aug 2021

Burst Imaging for Light-Constrained Structure-From-Motion
Ahalya Ravendran, M. Bryson, D. Dansereau
23 Aug 2021