A Survey Of Methods For Explaining Black Box Models
arXiv: 1802.01933 · 6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models"

Showing 50 of 411 citing papers
Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience
Q. V. Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan
HAI
21 Feb 2023

Why is the prediction wrong? Towards underfitting case explanation via meta-classification
Sheng Zhou, P. Blanchart, M. Crucianu, Marin Ferecatu
20 Feb 2023

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
FAtt
17 Feb 2023

Explaining text classifiers through progressive neighborhood approximation with realistic samples
Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
AI4TS
11 Feb 2023

Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
B. Keenan, Kacper Sokol
07 Feb 2023

Personalized Interpretable Classification
Zengyou He, Yifan Tang, Lianyu Hu, Yan Liu
06 Feb 2023

Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl
01 Feb 2023

Explainable Deep Reinforcement Learning: State of the Art and Challenges
G. Vouros
XAI
24 Jan 2023

Interpretability in Activation Space Analysis of Transformers: A Focused Survey
Soniya Vijayakumar
AI4CE
22 Jan 2023

Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations
Maximilian Schlüter, Gerrit Nolte, Alnis Murtovi, Bernhard Steffen
19 Jan 2023

Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
C. Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, S. Rinzivillo
MedIm
18 Jan 2023

Boosting Synthetic Data Generation with Effective Nonlinear Causal Discovery
Martina Cinquini, F. Giannotti, Riccardo Guidotti
18 Jan 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal
18 Jan 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
17 Jan 2023

Mapping Knowledge Representations to Concepts: A Review and New Perspectives
Lars Holmberg, P. Davidsson, Per Linde
31 Dec 2022

Multimodal Explainability via Latent Shift applied to COVID-19 stratification
V. Guarrasi, L. Tronchin, Domenico Albano, E. Faiella, Deborah Fazzini, D. Santucci, Paolo Soda
28 Dec 2022

Explainable AI for Bioinformatics: Methods, Tools, and Applications
Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker
25 Dec 2022

Interpretability and causal discovery of the machine learning models to predict the production of CBM wells after hydraulic fracturing
Chao Min, Guo-quan Wen, Liang Gou, Xiaogang Li, Zhaozhong Yang
CML
21 Dec 2022

Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants
Isil Guzey, Ozlem Ucar, N. A. Çiftdemir, B. Acunaş
17 Dec 2022

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022

Interpretable models for extrapolation in scientific machine learning
Eric S. Muckley, J. Saal, B. Meredig, C. Roper, James H. Martin
16 Dec 2022

Interpretable ML for Imbalanced Data
Damien Dablain, C. Bellinger, Bartosz Krawczyk, D. Aha, Nitesh V. Chawla
15 Dec 2022

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao
07 Dec 2022

Fairness and Explainability: Bridging the Gap Towards Fair Model Explanations
Yuying Zhao, Yu-Chiang Frank Wang, Tyler Derr
FaML
07 Dec 2022

Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas
07 Dec 2022

Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare
Rob Procter, P. Tolmie, M. Rouncefield
29 Nov 2022

Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath
27 Nov 2022

Testing the effectiveness of saliency-based explainability in NLP using randomized survey-based experiments
Adel Rahimi, Shaurya Jain
FAtt
25 Nov 2022

Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
Gayda Mutahar, Tim Miller
FAtt
19 Nov 2022

Supervised Feature Compression based on Counterfactual Analysis
V. Piccialli, Dolores Romero Morales, Cecilia Salvatore
CML
17 Nov 2022

Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
A. Chaddad, Qizong Lu, Jiali Li, Y. Katib, R. Kateb, C. Tanougast, Ahmed Bouridane, Ahmed Abdulkadir
OOD
17 Nov 2022

Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal
Laura State, Hadrien Salat, S. Rubrichi, Z. Smoreda
11 Nov 2022

REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study
Iván Sevillano-García, Julián Luengo-Martín, Francisco Herrera
XAI, FAtt
11 Nov 2022

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
XAI, FAtt
10 Nov 2022

Analysis of a Deep Learning Model for 12-Lead ECG Classification Reveals Learned Features Similar to Diagnostic Criteria
Theresa Bender, J. Beinecke, D. Krefting, Carolin Müller, Henning Dathe, T. Seidler, Nicolai Spicher, Anne-Christin Hauschild
FAtt
03 Nov 2022

Evaluation Metrics for Symbolic Knowledge Extracted from Machine Learning Black Boxes: A Discussion Paper
Federico Sabbatini, Roberta Calegari
01 Nov 2022

Clustering-Based Approaches for Symbolic Knowledge Extraction
Federico Sabbatini, Roberta Calegari
01 Nov 2022

Artificial intelligence in government: Concepts, standards, and a unified framework
Vince J. Straub, Deborah Morgan, Jonathan Bright, Helen Z. Margetts
AI4TS
31 Oct 2022

Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses
Adnan Qayyum, M. A. Butt, Hassan Ali, Muhammad Usman, O. Halabi, Ala I. Al-Fuqaha, Q. Abbasi, Muhammad Ali Imran, Junaid Qadir
24 Oct 2022

Logic-Based Explainability in Machine Learning
Sasha Rubin
LRM, XAI
24 Oct 2022

Explanation Shift: Detecting distribution shifts on tabular data via the explanation space
Carlos Mougan, Klaus Broelemann, Gjergji Kasneci, T. Tiropanis, Steffen Staab
FAtt
22 Oct 2022

Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling
Kalpa Gunaratna, Vijay Srinivasan, Akhila Yerukola, Hongxia Jin
19 Oct 2022

Explanations Based on Item Response Theory (eXirt): A Model-Specific Method to Explain Tree-Ensemble Model in Trust Perspective
José de Sousa Ribeiro Filho, Lucas F. F. Cardoso, R. Silva, Vitor Cirilo Araujo Santos, Nikolas Carneiro, Ronnie Cley de Oliveira Alves
18 Oct 2022

On the Explainability of Natural Language Processing Deep Models
Julia El Zini, M. Awad
13 Oct 2022

Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts
MILM
28 Sep 2022

Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot, Gianni Franchi, Javier Del Ser, Raja Chatila, Natalia Díaz Rodríguez
AAML
26 Sep 2022

Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl
FaML
23 Sep 2022

The Ability of Image-Language Explainable Models to Resemble Domain Expertise
P. Werner, Anna Zapaishchykova, Ujjwal Ratan
19 Sep 2022

RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations
Ricardo Müller, Marco Schreyer, Timur Sattarov, Damian Borth
AAML, MLAU
19 Sep 2022

A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis, A. Kamilaris
FAtt
19 Sep 2022