ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Cited By (arXiv:1910.10045)
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

22 October 2019
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
A. Barbado
S. García
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
    XAI

Papers citing "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI"

50 of 1,389 citing papers shown
Investigating the Impact of Independent Rule Fitnesses in a Learning Classifier System
Michael Heider
Helena Stegherr
Jonathan Wurth
Roman Sraj
J. Hähner
42
5
0
12 Jul 2022
eX-ViT: A Novel eXplainable Vision Transformer for Weakly Supervised Semantic Segmentation
Lu Yu
Wei Xiang
Juan Fang
Yi-Ping Phoebe Chen
Lianhua Chi
ViT
80
26
0
12 Jul 2022
Machine Learning Security in Industry: A Quantitative Survey
Kathrin Grosse
L. Bieringer
Tarek R. Besold
Battista Biggio
Katharina Krombholz
110
33
0
11 Jul 2022
Explainable AI (XAI) in Biomedical Signal and Image Processing: Promises and Challenges
Guang Yang
Arvind Rao
C. Fernandez-Maloigne
Vince D. Calhoun
Gloria Menegaz
28
9
0
09 Jul 2022
Deep Learning for Anomaly Detection in Log Data: A Survey
Max Landauer
Sebastian Onder
Florian Skopik
Markus Wurzenberger
102
97
0
08 Jul 2022
Fairness and Bias in Robot Learning
Laura Londoño
Juana Valeria Hurtado
Nora Hertz
P. Kellmeyer
S. Voeneky
Abhinav Valada
FaML
80
9
0
07 Jul 2022
Automating the Design and Development of Gradient Descent Trained Expert System Networks
Jeremy Straub
69
10
0
04 Jul 2022
Features of a Splashing Drop on a Solid Surface and the Temporal Evolution extracted through Image-Sequence Classification using an Interpretable Feedforward Neural Network
Jingzu Yee
Daichi Igarashi(五十嵐大地)
A. Yamanaka
Yoshiyuki Tagawa(田川義之)
36
1
0
03 Jul 2022
Interpretable by Design: Learning Predictors by Composing Interpretable Queries
Aditya Chattopadhyay
Stewart Slocum
B. Haeffele
René Vidal
D. Geman
113
24
0
03 Jul 2022
Learning Classifier Systems for Self-Explaining Socio-Technical-Systems
Michael Heider
Helena Stegherr
R. Nordsieck
J. Hähner
21
9
0
01 Jul 2022
On Computing Probabilistic Explanations for Decision Trees
Marcelo Arenas
Pablo Barceló
M. Romero
Bernardo Subercaseaux
FAtt
92
42
0
30 Jun 2022
Why we do need Explainable AI for Healthcare
Giovanni Cina
Tabea E. Rober
Rob Goedhart
Ilker Birbil
71
14
0
30 Jun 2022
Explaining Any ML Model? -- On Goals and Capabilities of XAI
Moritz Renftle
Holger Trittenbach
M. Poznic
Reinhard Heil
ELM
75
6
0
28 Jun 2022
Reducing Annotation Need in Self-Explanatory Models for Lung Nodule Diagnosis
Jiahao Lu
Chong Yin
Oswin Krause
Kenny Erleben
M. B. Nielsen
S. Darkner
MedIm
77
3
0
27 Jun 2022
Thermodynamics-inspired Explanations of Artificial Intelligence
S. Mehdi
P. Tiwary
AI4CE
65
18
0
27 Jun 2022
RES: A Robust Framework for Guiding Visual Explanation
Yuyang Gao
Tong Sun
Guangji Bai
Siyi Gu
S. Hong
Liang Zhao
FAtt, AAML, XAI
88
33
0
27 Jun 2022
Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
Zulqarnain Khan
Davin Hill
A. Masoomi
Joshua Bone
Jennifer Dy
AAML
138
4
0
24 Jun 2022
OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal
Dan Ley
Satyapriya Krishna
Eshika Saxena
Martin Pawelczyk
Nari Johnson
Isha Puri
Marinka Zitnik
Himabindu Lakkaraju
XAI
136
147
0
22 Jun 2022
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
Q. V. Liao
Yunfeng Zhang
Ronny Luss
Finale Doshi-Velez
Amit Dhurandhar
164
83
0
22 Jun 2022
Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
L. Herm
Kai Heinrich
Jonas Wanner
Christian Janiesch
38
88
0
20 Jun 2022
Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability
Jonathan Crabbé
Alicia Curth
Ioana Bica
M. Schaar
CML
110
16
0
16 Jun 2022
Multi-Objective Hyperparameter Optimization in Machine Learning -- An Overview
Florian Karl
Tobias Pielok
Julia Moosbauer
Florian Pfisterer
Stefan Coors
...
Jakob Richter
Michel Lang
Eduardo C. Garrido-Merchán
Juergen Branke
B. Bischl
AI4CE
86
61
0
15 Jun 2022
A Methodology and Software Architecture to Support Explainability-by-Design
T. D. Huynh
Niko Tsakalakis
Ayah Helal
Sophie Stalla-Bourdillon
Luc Moreau
55
5
0
13 Jun 2022
Efficient Human-in-the-loop System for Guiding DNNs Attention
Yi He
Xi Yang
Chia-Ming Chang
Haoran Xie
Takeo Igarashi
87
8
0
13 Jun 2022
Ask to Know More: Generating Counterfactual Explanations for Fake Claims
Shih-Chieh Dai
Yi-Li Hsu
Aiping Xiong
Lun-Wei Ku
OffRL
50
24
0
10 Jun 2022
ECLAD: Extracting Concepts with Local Aggregated Descriptors
Andres Felipe Posada-Moreno
N. Surya
Sebastian Trimpe
61
13
0
09 Jun 2022
A taxonomy of explanations to support Explainability-by-Design
Niko Tsakalakis
Sophie Stalla-Bourdillon
T. D. Huynh
Luc Moreau
XAI
17
2
0
09 Jun 2022
Balanced background and explanation data are needed in explaining deep learning models with SHAP: An empirical study on clinical decision making
Mingxuan Liu
Yilin Ning
Han Yuan
M. Ong
Nan Liu
FAtt
43
1
0
08 Jun 2022
Towards Explainable Social Agent Authoring tools: A case study on FAtiMA-Toolkit
Manuel Guimarães
Joana Campos
Pedro A. Santos
João Dias
R. Prada
29
1
0
07 Jun 2022
Towards better Interpretable and Generalizable AD detection using Collective Artificial Intelligence
H. Nguyen
Michael Clement
Boris Mansencal
Pierrick Coupé
MedIm
63
8
0
07 Jun 2022
Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
İbrahim Kök
Feyza Yıldırım Okay
Özgecan Muyanlı
S. Özdemir
XAI
74
55
0
07 Jun 2022
Explainability in Mechanism Design: Recent Advances and the Road Ahead
Sharadhi Alape Suryanarayana
David Sarne
Sarit Kraus
61
6
0
07 Jun 2022
A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala
Himabindu Lakkaraju
K. Kenthapadi
93
11
0
06 Jun 2022
Towards Responsible AI for Financial Transactions
Charl Maree
Jan Erik Modal
C. Omlin
AAML
105
17
0
06 Jun 2022
Interpretable Models Capable of Handling Systematic Missingness in Imbalanced Classes and Heterogeneous Datasets
Sreejita Ghosh
E. Baranowski
Michael Biehl
W. Arlt
Peter Tiño
66
6
0
04 Jun 2022
Future Artificial Intelligence tools and perspectives in medicine
Ahmad Chaddad
Y. Katib
Lama Hassan
89
8
0
04 Jun 2022
Analysis, Characterization, Prediction and Attribution of Extreme Atmospheric Events with Machine Learning: a Review
S. Salcedo-Sanz
Jorge Pérez-Aracil
G. Ascenso
Javier Del Ser
D. Casillas-Pérez
...
D. Barriopedro
R. García-Herrera
Marcello Restelli
M. Giuliani
A. Castelletti
AI4Cl
78
13
0
03 Jun 2022
XAI for Cybersecurity: State of the Art, Challenges, Open Issues and Future Directions
Gautam Srivastava
Rutvij H. Jhaveri
S. Bhattacharya
Sharnil Pandya
Rajeswari
Praveen Kumar Reddy Maddikunta
Gokul Yenduri
Jon G. Hall
M. Alazab
Thippa Reddy Gadekallu
90
56
0
03 Jun 2022
Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts
Chirag Raman
Hayley Hung
Marco Loog
87
3
0
01 Jun 2022
Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance
Matti Mäntymäki
Matti Minkkinen
Teemu Birkstedt
M. Viljanen
84
22
0
01 Jun 2022
Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema
R. D. Heide
T. Erven
FAtt
126
20
0
31 May 2022
Grid HTM: Hierarchical Temporal Memory for Anomaly Detection in Videos
V. Monakhov
Vajira Thambawita
Pål Halvorsen
Michael A. Riegler
AI4TS
26
0
0
30 May 2022
Multi-Fault Diagnosis Of Industrial Rotating Machines Using Data-Driven Approach: A Review Of Two Decades Of Research
S. Gawde
S. Patil
Shylendra Kumar
P. Kamat
K. Kotecha
Ajith Abraham
AI4CE
104
51
0
30 May 2022
Unfooling Perturbation-Based Post Hoc Explainers
Zachariah Carmichael
Walter J. Scheirer
AAML
92
15
0
29 May 2022
Interpretation Quality Score for Measuring the Quality of interpretability methods
Sean Xie
Soroush Vosoughi
Saeed Hassanpour
XAI
111
5
0
24 May 2022
Explanatory machine learning for sequential human teaching
L. Ai
Johannes Langer
Stephen Muggleton
Ute Schmid
102
5
0
20 May 2022
On Tackling Explanation Redundancy in Decision Trees
Yacine Izza
Alexey Ignatiev
Sasha Rubin
FAtt
100
64
0
20 May 2022
A Psychological Theory of Explainability
Scott Cheng-Hsin Yang
Tomas Folke
Patrick Shafto
XAI, FAtt
97
17
0
17 May 2022
Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai
Sohini Upadhyay
Ulrich Aïvodji
Stephen H. Bach
Himabindu Lakkaraju
94
58
0
15 May 2022
Grounding Explainability Within the Context of Global South in XAI
Deepa Singh
M. Slupczynski
Ajit G. Pillai
Vinoth Pandian Sermuga Pandian
34
3
0
13 May 2022