ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Survey Of Methods For Explaining Black Box Models
6 February 2018
Riccardo Guidotti
A. Monreale
Salvatore Ruggieri
Franco Turini
D. Pedreschi
F. Giannotti
    XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models"

50 / 1,104 papers shown
Boundary-Aware Uncertainty for Feature Attribution Explainers
Davin Hill
A. Masoomi
Max Torop
S. Ghimire
Jennifer Dy
FAtt
147
3
0
05 Oct 2022
Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology
R. Verhagen
Siddharth Mehrotra
Mark Antonius Neerincx
Catholijn M. Jonker
Myrthe L. Tielman
42
1
0
05 Oct 2022
Explanation-by-Example Based on Item Response Theory
Lucas F. F. Cardoso
Joseph Ribeiro
Vitor Santos
R. Silva
M. Mota
R. Prudêncio
Ronnie Cley de Oliveira Alves
87
5
0
04 Oct 2022
"Help Me Help the AI": Understanding How Explainability Can Support
  Human-AI Interaction
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim
E. A. Watkins
Olga Russakovsky
Ruth C. Fong
Andrés Monroy-Hernández
101
117
0
02 Oct 2022
Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu
Karel D'Oosterlinck
Atticus Geiger
Amir Zur
Christopher Potts
MILM
132
37
0
28 Sep 2022
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot
Gianni Franchi
Javier Del Ser
Raja Chatila
Natalia Díaz Rodríguez
AAML
84
28
0
26 Sep 2022
AI, Opacity, and Personal Autonomy
Bram Vaassen
FaML MLAU
39
29
0
25 Sep 2022
Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Jakob Schoeffer
Maria De-Arteaga
Niklas Kuehl
FaML
148
53
0
23 Sep 2022
The Ability of Image-Language Explainable Models to Resemble Domain Expertise
P. Werner
Anna Zapaishchykova
Ujjwal Ratan
79
2
0
19 Sep 2022
RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations
Ricardo Müller
Marco Schreyer
Timur Sattarov
Damian Borth
AAML MLAU
127
7
0
19 Sep 2022
A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis
A. Kamilaris
FAtt
64
1
0
19 Sep 2022
Enhanced Fairness Testing via Generating Effective Initial Individual Discriminatory Instances
Minghua Ma
Zhao Tian
Max Hort
Federica Sarro
Hongyu Zhang
Qingwei Lin
Dongmei Zhang
55
5
0
17 Sep 2022
Computing Abductive Explanations for Boosted Trees
Gilles Audemard
Jean-Marie Lagniez
Pierre Marquis
N. Szczepanski
84
14
0
16 Sep 2022
Studying the explanations for the automated prediction of bug and non-bug issues using LIME and SHAP
Benjamin Ledel
Steffen Herbold
FAtt
134
4
0
15 Sep 2022
Symbolic Knowledge Extraction from Opaque Predictors Applied to Cosmic-Ray Data Gathered with LISA Pathfinder
Federico Sabbatini
C. Grimani
48
11
0
10 Sep 2022
Shapley value-based approaches to explain the robustness of classifiers in machine learning
G. D. Pelegrina
S. Siraj
FAtt
28
3
0
09 Sep 2022
Change Detection for Local Explainability in Evolving Data Streams
Johannes Haug
Alexander Braun
Stefan Zurn
Gjergji Kasneci
FAtt
40
10
0
06 Sep 2022
Explaining Machine Learning Models in Natural Conversations: Towards a Conversational XAI Agent
Van Bach Nguyen
Jörg Schlotterer
C. Seifert
AILaw
38
12
0
06 Sep 2022
Making the black-box brighter: interpreting machine learning algorithm for forecasting drilling accidents
E. Gurina
Nikita Klyuchnikov
Ksenia Antipova
D. Koroteev
FAtt
76
8
0
06 Sep 2022
Visualization Of Class Activation Maps To Explain AI Classification Of Network Packet Captures
Igor Cherepanov
Alex Ulmer
Jonathan Geraldi Joewono
Jörn Kohlhammer
FAtt
58
5
0
05 Sep 2022
INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations
Jialin Yu
Alexandra I. Cristea
Anoushka Harit
Zhongtian Sun
O. Aduragba
Lei Shi
Noura Al Moubayed
74
10
0
02 Sep 2022
A Framework for Inherently Interpretable Optimization Models
Marc Goerigk
Michael Hartisch
AI4CE
100
17
0
26 Aug 2022
Augmented cross-selling through explainable AI -- a case from energy retailing
Felix Haag
K. Hopf
Pedro Menelau Vasconcelos
Thorsten Staake
62
4
0
24 Aug 2022
A Nested Genetic Algorithm for Explaining Classification Data Sets with Decision Rules
P. Matt
Rosina Ziegler
Danilo Brajovic
Marco Roth
Marco F. Huber
57
2
0
23 Aug 2022
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto
Tiago B. Gonçalves
João Ribeiro Pinto
W. Silva
Ana F. Sequeira
Arun Ross
Jaime S. Cardoso
XAI
110
13
0
19 Aug 2022
Causal Intervention Improves Implicit Sentiment Analysis
Siyin Wang
Jie Zhou
Changzhi Sun
Junjie Ye
Tao Gui
Qi Zhang
Xuanjing Huang
69
18
0
19 Aug 2022
Quality Diversity Evolutionary Learning of Decision Trees
Andrea Ferigo
Leonardo Lucio Custode
Giovanni Iacca
91
13
0
17 Aug 2022
A Visual Analytics System for Improving Attention-based Traffic Forecasting Models
Seungmin Jin
Hyunwoo Lee
Cheonbok Park
Hyeshin Chu
Yunwon Tae
Jaegul Choo
Sungahn Ko
42
15
0
08 Aug 2022
An Empirical Evaluation of Predicted Outcomes as Explanations in Human-AI Decision-Making
Johannes Jakubik
Jakob Schöffer
Vincent Hoge
Michael Vossing
Niklas Kühl
56
11
0
08 Aug 2022
Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso
Öznur Alkan
Wolfgang Stammer
Elizabeth M. Daly
XAI FAtt LRM
160
63
0
29 Jul 2022
A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics
Yiqiao Li
Jianlong Zhou
Sunny Verma
Fang Chen
XAI
100
40
0
26 Jul 2022
Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation
Oliver Mey
Deniz Neufeld
60
24
0
21 Jul 2022
Constrained Prescriptive Trees via Column Generation
Shivaram Subramanian
Wei-Ju Sun
Youssef Drissi
M. Ettl
81
9
0
20 Jul 2022
Lazy Estimation of Variable Importance for Large Neural Networks
Yue Gao
Abby Stevens
Rebecca Willett
Garvesh Raskutti
109
4
0
19 Jul 2022
Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language
Rita Sevastjanova
Mennatallah El-Assady
LRM
84
10
0
14 Jul 2022
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane
Jesse Ables
William Anderson
Sudip Mittal
Shahram Rahimi
I. Banicescu
Maria Seale
AAML
115
76
0
13 Jul 2022
The Mean Dimension of Neural Networks -- What causes the interaction effects?
Roman Hahn
Christoph Feinauer
E. Borgonovo
FAtt
48
2
0
11 Jul 2022
On Computing Relevant Features for Explaining NBCs
Yacine Izza
Sasha Rubin
97
5
0
11 Jul 2022
Local Multi-Label Explanations for Random Forest
Nikolaos Mylonas
Ioannis Mollas
Nick Bassiliades
Grigorios Tsoumakas
FAtt
42
7
0
05 Jul 2022
"Even if ..." -- Diverse Semifactual Explanations of Reject
"Even if ..." -- Diverse Semifactual Explanations of Reject
André Artelt
Barbara Hammer
73
12
0
05 Jul 2022
Comparing Feature Importance and Rule Extraction for Interpretability on Text Data
Gianluigi Lopardo
Damien Garreau
FAtt
98
1
0
04 Jul 2022
A systematic review of biologically-informed deep learning models for cancer: fundamental trends for encoding and interpreting oncology data
Magdalena Wysocka
Oskar Wysocki
Marie Zufferey
Dónal Landers
André Freitas
AI4CE
121
28
0
02 Jul 2022
Explaining Any ML Model? -- On Goals and Capabilities of XAI
Moritz Renftle
Holger Trittenbach
M. Poznic
Reinhard Heil
ELM
67
6
0
28 Jun 2022
RES: A Robust Framework for Guiding Visual Explanation
Yuyang Gao
Tong Sun
Guangji Bai
Siyi Gu
S. Hong
Liang Zhao
FAtt AAML XAI
88
33
0
27 Jun 2022
Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
Zulqarnain Khan
Davin Hill
A. Masoomi
Joshua Bone
Jennifer Dy
AAML
138
4
0
24 Jun 2022
OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal
Dan Ley
Satyapriya Krishna
Eshika Saxena
Martin Pawelczyk
Nari Johnson
Isha Puri
Marinka Zitnik
Himabindu Lakkaraju
XAI
136
147
0
22 Jun 2022
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
Q. V. Liao
Yunfeng Zhang
Ronny Luss
Finale Doshi-Velez
Amit Dhurandhar
147
83
0
22 Jun 2022
Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
L. Herm
Kai Heinrich
Jonas Wanner
Christian Janiesch
38
88
0
20 Jun 2022
A Dynamic Data Driven Approach for Explainable Scene Understanding
Z. Daniels
Dimitris N. Metaxas
FAtt
46
3
0
18 Jun 2022
Rectifying Mono-Label Boolean Classifiers
S. Coste-Marquis
Pierre Marquis
69
0
0
17 Jun 2022