A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
6 February 2018 · XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models"

Showing 50 of 1,104 citing papers.
Trust in Data Science: Collaboration, Translation, and Accountability in Corporate Data Science Projects
Samir Passi, S. Jackson
09 Feb 2020
Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support
Christian Meske, Enrico Bunde
04 Feb 2020
Four Principles of Explainable AI as Applied to Biometrics and Facial Forensic Algorithms
P. Phillips, Mark A. Przybocki
03 Feb 2020 · CVBM
Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze
03 Feb 2020 · AAML, FAtt, XAI
Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models
Giorgio Visani, Enrico Bagli, F. Chesani, A. Poluzzi, D. Capuzzo
31 Jan 2020 · FAtt
Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
Riccardo Guidotti, A. Monreale, Stan Matwin, D. Pedreschi
27 Jan 2020 · FAtt
Explainable Artificial Intelligence and Machine Learning: A reality rooted perspective
F. Emmert-Streib, O. Yli-Harja, M. Dehmer
26 Jan 2020
Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience
Bhavya Ghai, Q. V. Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Klaus Mueller
24 Jan 2020
Evaluating Weakly Supervised Object Localization Methods Right
Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim
21 Jan 2020 · WSOL
Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting
15 Jan 2020
A Formal Approach to Explainability
Lior Wolf, Tomer Galanti, Tamir Hazan
15 Jan 2020 · FAtt, GAN
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials
  for Humans
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
Vivian Lai
Han Liu
Chenhao Tan
90
143
0
14 Jan 2020
Explaining the Explainer: A First Theoretical Analysis of LIME
Damien Garreau, U. V. Luxburg
10 Jan 2020 · FAtt
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
08 Jan 2020 · AAML, AI4CE
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020
Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
Yunfeng Zhang, Q. V. Liao, Rachel K. E. Bellamy
07 Jan 2020
Teaching Responsible Data Science: Charting New Pedagogical Territory
Julia Stoyanovich, Armanda Lewis
23 Dec 2019
Exploring Interpretability for Predictive Process Analytics
Renuka Sindhgatta, Chun Ouyang, Catarina Moreira
22 Dec 2019
Learning Deep Attribution Priors Based On Prior Knowledge
Ethan Weinberger, Joseph D. Janizek, Su-In Lee
20 Dec 2019 · FAtt
Meta Decision Trees for Explainable Recommendation Systems
Eyal Shulman, Lior Wolf
19 Dec 2019
Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation
Tianhong Dai, Kai Arulkumaran, Tamara Gerbert, Samyakh Tukra, Feryal M. P. Behbahani, Anil Anthony Bharath
18 Dec 2019
Differentiable Reasoning on Large Knowledge Bases and Natural Language
Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette
17 Dec 2019 · LRM
Balancing the Tradeoff Between Clustering Value and Interpretability
Sandhya Saisubramanian, Sainyam Galhotra, S. Zilberstein
17 Dec 2019
From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning (Kay R. Amel group)
Zied Bouraoui, Antoine Cornuéjols, Thierry Denoeux, Sebastien Destercke, Didier Dubois, ..., Jérôme Mengin, H. Prade, Steven Schockaert, M. Serrurier, Christel Vrain
13 Dec 2019
Low-Cost Outdoor Air Quality Monitoring and Sensor Calibration: A Survey and Critical Analysis
Francesco Concas, Julien Mineraud, Eemil Lagerspetz, Samu Varjonen, Xiaoli Liu, Kai Puolamäki, Petteri Nurmi, Sasu Tarkoma
13 Dec 2019
Feature Relevance Determination for Ordinal Regression in the Context of Feature Redundancies and Privileged Information
Lukas Pfannschmidt, Jonathan Jakob, Fabian Hinder, Michael Biehl, Peter Tiño, Barbara Hammer
10 Dec 2019
Knowledge extraction from the learning of sequences in a long short term memory (LSTM) architecture
Ikram Chraibi Kaadoud, N. Rougier, F. Alexandre
06 Dec 2019
Automated Dependence Plots
David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar
02 Dec 2019
ACE -- An Anomaly Contribution Explainer for Cyber-Security Applications
Xiao Zhang, Manish Marwah, I-Ta Lee, M. Arlitt, Dan Goldwasser
01 Dec 2019
The relationship between trust in AI and trustworthy machine learning technologies
Ehsan Toreini, Mhairi Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Carlos Vladimiro Gonzalez Zelaya, Aad van Moorsel
27 Nov 2019 · FaML
Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Vanessa Buhrmester, David Münch, Michael Arens
27 Nov 2019 · MLAU, FaML, XAI, AAML
A psychophysics approach for quantitative comparison of interpretable computer vision models
F. Biessmann, D. Refiano
24 Nov 2019
LionForests: Local Interpretation of Random Forests
Ioannis Mollas, Nick Bassiliades, I. Vlahavas, Grigorios Tsoumakas
20 Nov 2019
An explanation method for Siamese neural networks
Lev V. Utkin, M. Kovalev, E. Kasimov
18 Nov 2019
Information Bottleneck Theory on Convolutional Neural Networks
Jianing Li, Ding Liu
09 Nov 2019 · FAtt
GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction
Thai V. Le, Suhang Wang, Dongwon Lee
05 Nov 2019
What Gets Echoed? Understanding the "Pointers" in Explanations of Persuasive Arguments
D. Atkinson, K. Srinivasan, Chenhao Tan
01 Nov 2019
Explanation by Progressive Exaggeration
Sumedha Singla, Brian Pollack, Junxiang Chen, Kayhan Batmanghelich
01 Nov 2019 · FAtt, MedIm
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
22 Oct 2019 · XAI
Contextual Prediction Difference Analysis for Explaining Individual Image Classifications
Jindong Gu, Volker Tresp
21 Oct 2019 · FAtt
Identifying the Most Explainable Classifier
Brett Mullins
18 Oct 2019 · FAtt
Many Faces of Feature Importance: Comparing Built-in and Post-hoc Feature Importance in Text Classification
Vivian Lai, Zheng Jon Cai, Chenhao Tan
18 Oct 2019 · FAtt
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
17 Oct 2019 · FAtt
Uncertainty-aware Sensitivity Analysis Using Rényi Divergences
Topi Paananen, Michael Riis Andersen, Aki Vehtari
17 Oct 2019
Extracting Incentives from Black-Box Decisions
Yonadav Shavit, William S. Moses
13 Oct 2019
NLS: an accurate and yet easy-to-interpret regression method
Victor Coscrato, M. Inácio, T. Botari, Rafael Izbicki
11 Oct 2019 · FAtt
Finding Interpretable Concept Spaces in Node Embeddings using Knowledge Bases
Maximilian Idahl, Megha Khosla, Avishek Anand
11 Oct 2019
Interpreting Deep Learning-Based Networking Systems
Zili Meng, Minhu Wang, Jia-Ju Bai, Mingwei Xu, Hongzi Mao, Hongxin Hu
09 Oct 2019 · AI4CE
Learn to Explain Efficiently via Neural Logic Inductive Learning
Yuan Yang, Le Song
06 Oct 2019 · NAI
REDS: Rule Extraction for Discovering Scenarios
Vadim Arzamasov, Klemens Böhm
03 Oct 2019