ResearchTrend.AI
A Survey Of Methods For Explaining Black Box Models
arXiv:1802.01933, 6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
Topic: XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models" (50 of 1,104 shown)
• Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
  Teodor Chiaburu, F. Biessmann, Frank Haußer (15 Jun 2022)
• A Methodology and Software Architecture to Support Explainability-by-Design
  T. D. Huynh, Niko Tsakalakis, Ayah Helal, Sophie Stalla-Bourdillon, Luc Moreau (13 Jun 2022)
• Efficient Human-in-the-loop System for Guiding DNNs Attention
  Yi He, Xi Yang, Chia-Ming Chang, Haoran Xie, Takeo Igarashi (13 Jun 2022)
• DORA: Exploring Outlier Representations in Deep Neural Networks
  Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Muller, Marina M.-C. Höhne (09 Jun 2022)
• Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
  Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser (08 Jun 2022)
• Balanced background and explanation data are needed in explaining deep learning models with SHAP: An empirical study on clinical decision making
  Mingxuan Liu, Yilin Ning, Han Yuan, M. Ong, Nan Liu (08 Jun 2022) [FAtt]
• Towards Explainable Social Agent Authoring tools: A case study on FAtiMA-Toolkit
  Manuel Guimarães, Joana Campos, Pedro A. Santos, João Dias, R. Prada (07 Jun 2022)
• Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
  İbrahim Kök, Feyza Yıldırım Okay, Özgecan Muyanlı, S. Özdemir (07 Jun 2022) [XAI]
• GRETEL: A unified framework for Graph Counterfactual Explanation Evaluation
  Mario Alfonso Prado-Romero, Giovanni Stilo (07 Jun 2022)
• Towards Responsible AI for Financial Transactions
  Charl Maree, Jan Erik Modal, C. Omlin (06 Jun 2022) [AAML]
• Attribution-based Explanations that Provide Recourse Cannot be Robust
  H. Fokkema, R. D. Heide, T. Erven (31 May 2022) [FAtt]
• GlanceNets: Interpretabile, Leak-proof Concept-based Models
  Emanuele Marconato, Andrea Passerini, Stefano Teso (31 May 2022)
• Fool SHAP with Stealthily Biased Sampling
  Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, M. Marchand, Foutse Khomh (30 May 2022) [MLAU, AAML, FAtt]
• CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
  Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Y. Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu (27 May 2022) [CML]
• Interpretation Quality Score for Measuring the Quality of interpretability methods
  Sean Xie, Soroush Vosoughi, Saeed Hassanpour (24 May 2022) [XAI]
• Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement
  Antonio Rago, P. Baroni, Francesca Toni (23 May 2022) [CML]
• Explanatory machine learning for sequential human teaching
  L. Ai, Johannes Langer, Stephen Muggleton, Ute Schmid (20 May 2022)
• On Tackling Explanation Redundancy in Decision Trees
  Yacine Izza, Alexey Ignatiev, Sasha Rubin (20 May 2022) [FAtt]
• FIND: Explainable Framework for Meta-learning
  Xinyue Shao, Hongzhi Wang, Xiao-Wen Zhu, Feng Xiong (20 May 2022) [FedML]
• Provably Precise, Succinct and Efficient Explanations for Decision Trees
  Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin (19 May 2022) [FAtt]
• One Explanation to Rule them All -- Ensemble Consistent Explanations
  André Artelt, Stelios G. Vrachimis, Demetrios G. Eliades, Marios M. Polycarpou, Barbara Hammer (18 May 2022)
• A Psychological Theory of Explainability
  Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto (17 May 2022) [XAI, FAtt]
• Is explainable AI a race against model complexity?
  Advait Sarkar (17 May 2022) [LRM]
• Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
  Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju (15 May 2022)
• Fairness and Explainability in Automatic Decision-Making Systems. A challenge for computer science and law
  Thierry Kirat, Olivia Tambou, Virginie Do, A. Tsoukiás (14 May 2022) [FaML]
• Modeling Human Behavior Part II -- Cognitive approaches and Uncertainty
  Andrew Fuchs, A. Passarella, M. Conti (13 May 2022)
• "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making
  Jakob Schoeffer, Niklas Kuehl, Yvette Machowski (11 May 2022) [FaML]
• Keep Your Friends Close and Your Counterfactuals Closer: Improved Learning From Closest Rather Than Plausible Counterfactual Explanations in an Abstract Setting
  Ulrike Kuhl, André Artelt, Barbara Hammer (11 May 2022)
• "If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment
  Yaniv Yacoby, Ben Green, Christopher L. Griffin, Finale Doshi-Velez (11 May 2022)
• Lifelong Personal Context Recognition
  A. Bontempelli, Marcelo D. Rodas-Brítez, Xiaoyue Li, Haonan Zhao, L. Erculiani, Stefano Teso, Andrea Passerini, Fausto Giunchiglia (10 May 2022)
• Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection
  Esma Balkir, I. Nejadgholi, Kathleen C. Fraser, S. Kiritchenko (06 May 2022) [FAtt]
• Tell Me Something That Will Help Me Trust You: A Survey of Trust Calibration in Human-Agent Interaction
  G. Cancro, Shimei Pan, James R. Foulds (06 May 2022)
• Interactive Model Cards: A Human-Centered Approach to Model Documentation
  Anamaria Crisan, Margaret Drouhard, Jesse Vig, Nazneen Rajani (05 May 2022) [HAI]
• Evaluating Deep Taylor Decomposition for Reliability Assessment in the Wild
  Stephanie Brandl, Daniel Hershcovich, Anders Søgaard (03 May 2022)
• TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security
  Maede Zolanvari, Zebo Yang, K. Khan, Rajkumar Jain, N. Meskin (02 May 2022)
• Designing for Responsible Trust in AI Systems: A Communication Perspective
  Q. V. Liao, S. Sundar (29 Apr 2022)
• Standardized Evaluation of Machine Learning Methods for Evolving Data Streams
  Johannes Haug, Effi Tramountani, Gjergji Kasneci (28 Apr 2022)
• Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
  Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan (25 Apr 2022)
• Integrating Prior Knowledge in Post-hoc Explanations
  Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki (25 Apr 2022)
• Computing the Collection of Good Models for Rule Lists
  Kota Mata, Kentaro Kanamori, Hiroki Arimura (24 Apr 2022)
• Data Debugging with Shapley Importance over End-to-End Machine Learning Pipelines
  Bojan Karlaš, David Dao, Matteo Interlandi, Yue Liu, Sebastian Schelter, Wentao Wu, Ce Zhang (23 Apr 2022) [TDI]
• Exploring Hidden Semantics in Neural Networks with Symbolic Regression
  Yuanzhen Luo, Qiang Lu, Xilei Hu, Jake Luo, Zhiguang Wang (22 Apr 2022)
• Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
  Greta Warren, Mark T. Keane, R. Byrne (21 Apr 2022) [CML]
• Ordinal-ResLogit: Interpretable Deep Residual Neural Networks for Ordered Choices
  K. Kamal, Bilal Farooq (20 Apr 2022)
• Recurrent neural networks that generalize from examples and optimize by dreaming
  Miriam Aquaro, Francesco Alemanno, Ido Kanter, Fabrizio Durante, E. Agliari, Adriano Barra (17 Apr 2022) [CLL]
• A Set Membership Approach to Discovering Feature Relevance and Explaining Neural Classifier Decisions
  S. P. Adam, A. Likas (05 Apr 2022)
• On Explaining Multimodal Hateful Meme Detection Models
  Ming Shan Hee, Roy Ka-wei Lee, Wen-Haw Chong (04 Apr 2022) [VLM]
• Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries
  Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Wenliang Li, Judy Hoffman, Duen Horng Chau (30 Mar 2022)
• BELLATREX: Building Explanations through a LocaLly AccuraTe Rule EXtractor
  Klest Dedja, F. Nakano, Konstantinos Pliakos, C. Vens (29 Mar 2022)
• User Driven Model Adjustment via Boolean Rule Explanations
  Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair (28 Mar 2022) [AAML]