Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

22 October 2019
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, A. Barbado, S. García, S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
    XAI

Papers citing "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI"

50 / 1,389 papers shown
Explaining an image classifier with a generative model conditioned by uncertainty
Adrien Le Coz, Stéphane Herbin, Faouzi Adjed (02 Oct 2024)

ProxiMix: Enhancing Fairness with Proximity Samples in Subgroups
Jingyu Hu, Jun Hong, Mengnan Du, Weiru Liu (02 Oct 2024)

Explainable Multi-Stakeholder Job Recommender Systems
Roan Schellingerhout (01 Oct 2024)

Easydiagnos: a framework for accurate feature selection for automatic diagnosis in smart healthcare
Prasenjit Maji, Amit Kumar Mondal, Hemanta Kumar Mondal, Saraju P. Mohanty (01 Oct 2024)

Trustworthy Text-to-Image Diffusion Models: A Timely and Focused Survey
Yi Zhang, Zhen Chen, Chih-Hong Cheng, Wenjie Ruan, Xiaowei Huang, Dezong Zhao, David Flynn, Siddartha Khastgir, Xingyu Zhao (26 Sep 2024) [MedIm]

Explaining Explaining
S. Nirenburg, M. McShane, Kenneth W. Goodman, Sanjay Oruganti (26 Sep 2024)

A multi-source data power load forecasting method using attention mechanism-based parallel cnn-gru
Chao Min, Yijia Wang, Bo Zhang, Xin Ma, Junyi Cui (26 Sep 2024) [AI4TS]

A novel application of Shapley values for large multidimensional time-series data: Applying explainable AI to a DNA profile classification neural network
Lauren Elborough, Duncan Taylor, Melissa Humphries (26 Sep 2024) [AI4TS]

GRACE: Generating Socially Appropriate Robot Actions Leveraging LLMs and Human Explanations
Fethiye Irmak Dogan, Umut Ozyurt, Gizem Cinar, Hatice Gunes (25 Sep 2024) [LLMAG]

M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning
Taowen Wang, Yiyang Liu, James Liang, Junhan Zhao, Yiming Cui, ..., Zenglin Xu, Cheng Han, Lifu Huang, Qifan Wang, Dongfang Liu (24 Sep 2024) [MLLM, VLM, LRM]

The FIX Benchmark: Extracting Features Interpretable to eXperts
Helen Jin, Shreya Havaldar, Chaehyeon Kim, Anton Xue, Weiqiu You, ..., Bhuvnesh Jain, Amin Madani, M. Sako, Lyle Ungar, Eric Wong (20 Sep 2024)

Interpret the Predictions of Deep Networks via Re-Label Distillation
Yingying Hua, Shiming Ge, Daichi Zhang (20 Sep 2024) [FAtt]

Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
Philipp Spitzer, Joshua Holstein, Katelyn Morrison, Kenneth Holstein, Gerhard Satzger, Niklas Kühl (19 Sep 2024)

Enhancing Security Testing Software for Systems that Cannot be Subjected to the Risks of Penetration Testing Through the Incorporation of Multi-threading and Other Capabilities
Matthew Tassava, Cameron Kolodjski, Jordan Milbrath, Jeremy Straub (17 Sep 2024)

Global Lightning-Ignited Wildfires Prediction and Climate Change Projections based on Explainable Machine Learning Models
Assaf Shmuel, Teddy Lazebnik, Oren Glickman, Eyal Heifetz, Colin Price (16 Sep 2024)

Evaluating Cultural Awareness of LLMs for Yoruba, Malayalam, and English
Fiifi Dawson, Zainab Mosunmola, Sahil Pocker, Raj Abhijit Dandekar, Rajat Dandekar, Sreedath Panat (14 Sep 2024)

The Role of Explainable AI in Revolutionizing Human Health Monitoring: A Review
Abdullah Alharthi, Ahmed Alqurashi, Turki Alharbi, Mohammed Alammar, Nasser Aldosari, Houssem Bouchekara, Yusuf Shaaban, Mohammad Shoaib Shahriar, Abdulrahman Al Ayidh (11 Sep 2024)

Automate Strategy Finding with LLM in Quant Investment
Zhizhuo Kou, Holam Yu, Junyu Luo, Jingshu Peng, Xujia Li, Chengzhong Liu, Juntao Dai, Lei Chen, Sirui Han, Yike Guo (10 Sep 2024) [AIFin]

Explainable AI: Definition and attributes of a good explanation for health AI
E. Kyrimi, S. McLachlan, Jared M. Wohlgemut, Zane B. Perkins, David A. Lagnado, W. Marsh, the ExAIDSS Expert Group (09 Sep 2024) [XAI]

Standing on the shoulders of giants
Lucas Felipe Ferraro Cardoso, José de Sousa Ribeiro Filho, Vitor Cirilo Araujo Santos, Regiane Silva Kawasaki Frances, Ronnie Cley de Oliveira Alves (05 Sep 2024)

Initial Development and Evaluation of the Creative Artificial Intelligence through Recurring Developments and Determinations (CAIRDD) System
Jeremy Straub, Zach Johnson (03 Sep 2024)

Interpreting Outliers in Time Series Data through Decoding Autoencoder
Katharina Prasse, Sascha Marton, Christian Bartelt, Robert Fuder (03 Sep 2024)

Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita (30 Aug 2024) [XAI, AI4TS]

A prototype-based model for set classification
Mohammad Mohammadi, Sreejita Ghosh (25 Aug 2024) [VLM]

The Clever Hans Effect in Unsupervised Learning
Jacob R. Kauffmann, Jonas Dippel, Lukas Ruff, Wojciech Samek, Klaus-Robert Müller, G. Montavon (15 Aug 2024) [SSL, CML, HAI]

KAN You See It? KANs and Sentinel for Effective and Explainable Crop Field Segmentation
Daniele Rege Cambrin, Eleonora Poeta, Eliana Pastor, Tania Cerquitelli, Elena Baralis, Paolo Garza (13 Aug 2024)

Case-based Explainability for Random Forest: Prototypes, Critics, Counter-factuals and Semi-factuals
Gregory Yampolsky, Dhruv Desai, Mingshu Li, Stefano Pasquali, Dhagash Mehta (13 Aug 2024)

Misfitting With AI: How Blind People Verify and Contest AI Errors
Rahaf Alharbi, P. Lor, Jaylin Herskovitz, S. Schoenebeck, Robin Brewer (13 Aug 2024)

Fooling SHAP with Output Shuffling Attacks
Jun Yuan, Aritra Dasgupta (12 Aug 2024)

Finding Patterns in Ambiguity: Interpretable Stress Testing in the Decision Boundary
Ines Gomes, Luís F. Teixeira, Jan N. van Rijn, Carlos Soares, André Restivo, Luís Cunha, Moisés Santos (12 Aug 2024) [FAtt]

Centralized and Federated Heart Disease Classification Models Using UCI Dataset and their Shapley-value Based Interpretability
Mario Padilla Rodriguez, Mohamed Nafea (12 Aug 2024) [FedML]

Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models
Upol Ehsan, Mark O. Riedl (09 Aug 2024)

SegXAL: Explainable Active Learning for Semantic Segmentation in Driving Scene Scenarios
Sriram Mandalika, Athira Nambiar (08 Aug 2024)

Strong and weak alignment of large language models with human values
Mehdi Khamassi, Marceau Nahon, Raja Chatila (05 Aug 2024) [ALM]

Backward Compatibility in Attributive Explanation and Enhanced Model Training Method
Ryuta Matsuno (05 Aug 2024)

KAN we improve on HEP classification tasks? Kolmogorov-Arnold Networks applied to an LHC physics example
Johannes Erdmann, F. Mausolf, Jan Lukas Späh (05 Aug 2024)

Algorithm, Expert, or Both? Evaluating the Role of Feature Selection Methods on User Preferences and Reliance
Suvarthi Sarkar, Akshat Mittal (02 Aug 2024)

Interpreting Global Perturbation Robustness of Image Models using Axiomatic Spectral Importance Decomposition
Róisín Luo, James McDermott, C. O'Riordan (02 Aug 2024) [AAML]

Explainable Emotion Decoding for Human and Computer Vision
Alessio Borriero, Martina Milazzo, M. Diano, Davide Orsenigo, Maria Chiara Villa, Chiara Di Fazio, Marco Tamietto, Alan Perotti (01 Aug 2024)

Discovering Car-following Dynamics from Trajectory Data through Deep Learning
Ohay Angah, James Enouen, Xuegang (Jeff) Ban, Yan Liu (01 Aug 2024)

Review of Explainable Graph-Based Recommender Systems
Thanet Markchom, Huizhi Liang, James Ferryman (31 Jul 2024) [XAI]

Need of AI in Modern Education: in the Eyes of Explainable AI (xAI)
Supriya Manna, Dionis Barcari (31 Jul 2024)

From Feature Importance to Natural Language Explanations Using LLMs with RAG
Sule Tekkesinoglu, Lars Kunze (30 Jul 2024) [FAtt]

Metaheuristic Enhanced with Feature-Based Guidance and Diversity Management for Solving the Capacitated Vehicle Routing Problem
Bachtiar Herdianto, Romain Billot, Flavien Lucas, Marc Sevaux (30 Jul 2024)

An Interpretable Rule Creation Method for Black-Box Models based on Surrogate Trees -- SRules
Mario Parrón Verdasco, Esteban García-Cuesta (29 Jul 2024)

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart (27 Jul 2024)

Vulnerability Detection in Ethereum Smart Contracts via Machine Learning: A Qualitative Analysis
Dalila Ressi, Alvise Spanò, Lorenzo Benetollo, Carla Piazza, M. Bugliesi, Sabina Rossi (26 Jul 2024)

SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection
Yi Zhu, Surya Koppisetti, Trang Tran, Gaurav Bharaj (26 Jul 2024)

A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting
Pierre-Daniel Arsenault, Shengrui Wang, Jean-Marc Patenande (22 Jul 2024) [XAI, AI4TS]

Interpretable Concept-Based Memory Reasoning
David Debot, Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, Michelangelo Diligenti, Giuseppe Marra (22 Jul 2024) [LRM]