A Survey of Methods for Explaining Black Box Models (arXiv:1802.01933)

6 February 2018
Riccardo Guidotti
A. Monreale
Salvatore Ruggieri
Franco Turini
D. Pedreschi
F. Giannotti
XAI

Papers citing "A Survey of Methods for Explaining Black Box Models"

50 / 1,104 papers shown
A Theoretical Framework for AI Models Explainability with Application in Biomedicine
Matteo Rizzo
Alberto Veneri
A. Albarelli
Claudio Lucchese
Marco Nobile
Cristina Conati
XAI
83
9
0
29 Dec 2022
Multimodal Explainability via Latent Shift applied to COVID-19 stratification
V. Guarrasi
L. Tronchin
Domenico Albano
E. Faiella
Deborah Fazzini
D. Santucci
Paolo Soda
77
24
0
28 Dec 2022
Explainable AI for Bioinformatics: Methods, Tools, and Applications
Md. Rezaul Karim
Tanhim Islam
Oya Beyan
Christoph Lange
Michael Cochez
Dietrich Rebholz-Schuhmann
Stefan Decker
114
78
0
25 Dec 2022
Interpretability and causal discovery of the machine learning models to predict the production of CBM wells after hydraulic fracturing
Chao Min
Guo-quan Wen
Liang Gou
Xiaogang Li
Zhaozhong Yang
CML
41
12
0
21 Dec 2022
Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants
Isil Guzey
Ozlem Ucar
N. A. Çiftdemir
B. Acunaş
99
2
0
17 Dec 2022
Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney
A. Pakrashi
Derek Greene
Mark T. Keane
79
17
0
16 Dec 2022
Interpretable models for extrapolation in scientific machine learning
Eric S. Muckley
J. Saal
B. Meredig
C. Roper
James H. Martin
51
36
0
16 Dec 2022
Interpretable ML for Imbalanced Data
Damien Dablain
C. Bellinger
Bartosz Krawczyk
D. Aha
Nitesh Chawla
76
1
0
15 Dec 2022
Dual Accuracy-Quality-Driven Neural Network for Prediction Interval Generation
Giorgio Morales
John W. Sheppard
62
5
0
13 Dec 2022
On Computing Probabilistic Abductive Explanations
Yacine Izza
Xuanxiang Huang
Alexey Ignatiev
Nina Narodytska
Martin C. Cooper
Sasha Rubin
FAtt, XAI
111
20
0
12 Dec 2022
Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks
Qihan Huang
Mengqi Xue
Wenqi Huang
Haofei Zhang
Mingli Song
Yongcheng Jing
Mingli Song
AAML
74
28
0
12 Dec 2022
Causality-Aware Local Interpretable Model-Agnostic Explanations
Martina Cinquini
Riccardo Guidotti
CML
80
1
0
10 Dec 2022
Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao
Siyi Gu
Junji Jiang
S. Hong
Dazhou Yu
Liang Zhao
76
42
0
07 Dec 2022
Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations
Yuying Zhao
Yu Wang
Hanyu Wang
FaML
88
16
0
07 Dec 2022
Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Ioannis Mollas
Nick Bassiliades
Grigorios Tsoumakas
52
3
0
07 Dec 2022
Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare
Rob Procter
P. Tolmie
M. Rouncefield
39
33
0
29 Nov 2022
Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek
Deeksha Kamath
67
21
0
27 Nov 2022
Testing the effectiveness of saliency-based explainability in NLP using randomized survey-based experiments
Adel Rahimi
Shaurya Jain
FAtt
94
0
0
25 Nov 2022
EVNet: An Explainable Deep Network for Dimension Reduction
Z. Zang
Sheng-Hsien Cheng
Linyan Lu
Hanchen Xia
Liangyu Li
Yaoting Sun
Yongjie Xu
Lei Shang
Baigui Sun
Stan Z. Li
FAtt
80
17
0
21 Nov 2022
Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
Gayda Mutahar
Tim Miller
FAtt
51
6
0
19 Nov 2022
Evaluating generative models in high energy physics
Raghav Kansal
Anni Li
Javier Mauricio Duarte
N. Chernyavskaya
M. Pierini
B. Orzari
T. Tomei
MedIm
71
36
0
18 Nov 2022
Explainability Via Causal Self-Talk
Nicholas A. Roy
Junkyung Kim
Neil C. Rabinowitz
CML
87
7
0
17 Nov 2022
Supervised Feature Compression based on Counterfactual Analysis
V. Piccialli
Dolores Romero Morales
Cecilia Salvatore
CML
49
2
0
17 Nov 2022
Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
Ahmad Chaddad
Qizong Lu
Jiali Li
Y. Katib
R. Kateb
C. Tanougast
Ahmed Bouridane
Ahmed Abdulkadir
OOD
72
38
0
17 Nov 2022
What Images are More Memorable to Machines?
Junlin Han
Huangying Zhan
Jie Hong
Pengfei Fang
Hongdong Li
L. Petersson
Ian Reid
71
3
0
14 Nov 2022
Seamful XAI: Operationalizing Seamful Design in Explainable AI
Upol Ehsan
Q. V. Liao
Samir Passi
Mark O. Riedl
Hal Daumé
91
23
0
12 Nov 2022
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal
Laura State
Hadrien Salat
S. Rubrichi
Z. Smoreda
44
1
0
11 Nov 2022
REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study
Iván Sevillano-García
Julián Luengo-Martín
Francisco Herrera
XAI, FAtt
51
9
0
11 Nov 2022
What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen
Varshini Subhash
Marton Havasi
Weiwei Pan
Finale Doshi-Velez
XAI, FAtt
119
19
0
10 Nov 2022
Interpretable Explainability in Facial Emotion Recognition and Gamification for Data Collection
Krist Shingjergji
Deniz Iren
Felix Böttger
Corrie C. Urlings
R. Klemke
CVBM
57
3
0
09 Nov 2022
Analysis of a Deep Learning Model for 12-Lead ECG Classification Reveals Learned Features Similar to Diagnostic Criteria
Theresa Bender
J. Beinecke
D. Krefting
Carolin Müller
Henning Dathe
T. Seidler
Nicolai Spicher
Anne-Christin Hauschild
FAtt
36
27
0
03 Nov 2022
Explainable AI over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions
Senthil Kumar Jagatheesaperumal
Quoc-Viet Pham
Rukhsana Ruby
Zhaohui Yang
Chunmei Xu
Zhaoyang Zhang
96
56
0
02 Nov 2022
Evaluation Metrics for Symbolic Knowledge Extracted from Machine Learning Black Boxes: A Discussion Paper
Federico Sabbatini
Roberta Calegari
49
2
0
01 Nov 2022
Clustering-Based Approaches for Symbolic Knowledge Extraction
Federico Sabbatini
Roberta Calegari
13
1
0
01 Nov 2022
Artificial intelligence in government: Concepts, standards, and a unified framework
Vince J. Straub
Deborah Morgan
Jonathan Bright
Helen Z. Margetts
AI4TS
75
36
0
31 Oct 2022
Explaining the Explainers in Graph Neural Networks: a Comparative Study
Antonio Longa
Steve Azzolin
G. Santin
G. Cencetti
Pietro Lio
Bruno Lepri
Andrea Passerini
107
31
0
27 Oct 2022
Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses
Adnan Qayyum
M. A. Butt
Hassan Ali
Muhammad Usman
O. Halabi
Ala I. Al-Fuqaha
Q. Abbasi
Muhammad Ali Imran
Junaid Qadir
84
37
0
24 Oct 2022
Logic-Based Explainability in Machine Learning
Sasha Rubin
LRM, XAI
137
40
0
24 Oct 2022
Explanation Shift: Detecting distribution shifts on tabular data via the explanation space
Carlos Mougan
Klaus Broelemann
Gjergji Kasneci
T. Tiropanis
Steffen Staab
FAtt
59
7
0
22 Oct 2022
Trustworthy Human Computation: A Survey
H. Kashima
S. Oyama
Hiromi Arai
Junichiro Mori
84
1
0
22 Oct 2022
A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges
Mario Alfonso Prado-Romero
Bardh Prenkaj
Giovanni Stilo
F. Giannotti
CML
139
33
0
21 Oct 2022
Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong
Tobias Leemann
Thai-trang Nguyen
Lisa Fiedler
Peizhu Qian
Vaibhav Unhelkar
Tina Seidel
Gjergji Kasneci
Enkelejda Kasneci
ELM
108
103
0
20 Oct 2022
Black Box Model Explanations and the Human Interpretability Expectations -- An Analysis in the Context of Homicide Prediction
José Ribeiro
Nikolas Carneiro
Ronnie Cley de Oliveira Alves
52
0
0
19 Oct 2022
Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling
Kalpa Gunaratna
Vijay Srinivasan
Akhila Yerukola
Hongxia Jin
64
7
0
19 Oct 2022
Explanations Based on Item Response Theory (eXirt): A Model-Specific Method to Explain Tree-Ensemble Model in Trust Perspective
José de Sousa Ribeiro Filho
Lucas F. F. Cardoso
R. Silva
Vitor Cirilo Araujo Santos
Nikolas Carneiro
Ronnie Cley de Oliveira Alves
47
4
0
18 Oct 2022
On the Impact of Temporal Concept Drift on Model Explanations
Zhixue Zhao
G. Chrysostomou
Kalina Bontcheva
Nikolaos Aletras
99
16
0
17 Oct 2022
A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities
Andrea Tocchetti
Lorenzo Corti
Agathe Balayn
Mireia Yurrita
Philip Lippmann
Marco Brambilla
Jie Yang
84
14
0
17 Oct 2022
A Survey on Explainable Anomaly Detection
Zhong Li
Yuxuan Zhu
M. Leeuwen
115
79
0
13 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini
M. Awad
65
86
0
13 Oct 2022
On the Evaluation of the Plausibility and Faithfulness of Sentiment Analysis Explanations
Julia El Zini
Mohamad Mansour
Basel Mousi
M. Awad
64
8
0
13 Oct 2022