A Survey Of Methods For Explaining Black Box Models


6 February 2018 · arXiv:1802.01933
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models"

Showing 50 of 419 citing papers
Model Explanations with Differential Privacy
  Neel Patel, Reza Shokri, Yair Zick · SILM, FedML · 16 Jun 2020
OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms
  Giorgio Visani, Enrico Bagli, F. Chesani · FAtt · 10 Jun 2020
Principles to Practices for Responsible AI: Closing the Gap
  Daniel S. Schiff, B. Rakova, A. Ayesh, Anat Fanti, M. Lennon · 08 Jun 2020
Model-agnostic Feature Importance and Effects with Dependent Features -- A Conditional Subgroup Approach
  Christoph Molnar, Gunnar König, B. Bischl, Giuseppe Casalicchio · 08 Jun 2020
A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
  Kevin Fauvel, Véronique Masson, Elisa Fromont · AI4TS · 29 May 2020
Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles
  Mário Popolin Neto, F. Paulovich · FAtt · 08 May 2020
Contextualizing Hate Speech Classifiers with Post-hoc Explanation
  Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren · 05 May 2020
Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
  Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate · AAML · 05 May 2020
Post-hoc explanation of black-box classifiers using confident itemsets
  M. Moradi, Matthias Samwald · 05 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
  Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 30 Apr 2020
Learning a Formula of Interpretability to Learn Interpretable Formulas
  M. Virgolin, A. D. Lorenzo, Eric Medvet, Francesca Randone · 23 Apr 2020
Born-Again Tree Ensembles
  Thibaut Vidal, Toni Pacheco, Maximilian Schiffer · 24 Mar 2020
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
  Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · XAI · 17 Mar 2020
Self-Supervised Discovering of Interpretable Features for Reinforcement Learning
  Wenjie Shi, Gao Huang, Shiji Song, Zhuoyuan Wang, Tingyu Lin, Cheng Wu · SSL · 16 Mar 2020
ViCE: Visual Counterfactual Explanations for Machine Learning Models
  Oscar Gomez, Steffen Holter, Jun Yuan, E. Bertini · AAML · 05 Mar 2020
Testing Monotonicity of Machine Learning Models
  Arnab Sharma, Heike Wehrheim · 27 Feb 2020
Better Classifier Calibration for Small Data Sets
  Tuomo Alasalmi, Jaakko Suutala, Heli Koskimäki, J. Röning · 24 Feb 2020
The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
  Andrés Páez · 22 Feb 2020
AI safety: state of the field through quantitative lens
  Mislav Juric, A. Sandic, Mario Brčič · 12 Feb 2020
Convex Density Constraints for Computing Plausible Counterfactual Explanations
  André Artelt, Barbara Hammer · 12 Feb 2020
Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support
  Christian Meske, Enrico Bunde · 04 Feb 2020
Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
  Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze · AAML, FAtt, XAI · 03 Feb 2020
Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
  Riccardo Guidotti, A. Monreale, Stan Matwin, D. Pedreschi · FAtt · 27 Jan 2020
Evaluating Weakly Supervised Object Localization Methods Right
  Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim · WSOL · 21 Jan 2020
Making deep neural networks right for the right scientific reasons by interacting with their explanations
  P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting · 15 Jan 2020
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
  Vivian Lai, Han Liu, Chenhao Tan · 14 Jan 2020
Explaining the Explainer: A First Theoretical Analysis of LIME
  Damien Garreau, U. V. Luxburg · FAtt · 10 Jan 2020
On Interpretability of Artificial Neural Networks: A Survey
  Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang · AAML, AI4CE · 08 Jan 2020
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
  Q. V. Liao, D. Gruen, Sarah Miller · 08 Jan 2020
Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
  Yunfeng Zhang, Q. V. Liao, Rachel K. E. Bellamy · 07 Jan 2020
Exploring Interpretability for Predictive Process Analytics
  Renuka Sindhgatta, Chun Ouyang, Catarina Moreira · 22 Dec 2019
Differentiable Reasoning on Large Knowledge Bases and Natural Language
  Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette · LRM · 17 Dec 2019
Balancing the Tradeoff Between Clustering Value and Interpretability
  Sandhya Saisubramanian, Sainyam Galhotra, S. Zilberstein · 17 Dec 2019
Automated Dependence Plots
  David I. Inouye, Liu Leqi, Joon Sik Kim, Bryon Aragam, Pradeep Ravikumar · 02 Dec 2019
LionForests: Local Interpretation of Random Forests
  Ioannis Mollas, Nick Bassiliades, I. Vlahavas, Grigorios Tsoumakas · 20 Nov 2019
An explanation method for Siamese neural networks
  Lev V. Utkin, M. Kovalev, E. Kasimov · 18 Nov 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
  Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera · XAI · 22 Oct 2019
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
  Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar · FAtt · 17 Oct 2019
Uncertainty-aware Sensitivity Analysis Using Rényi Divergences
  Topi Paananen, Michael Riis Andersen, Aki Vehtari · 17 Oct 2019
Empirical Analysis of Multi-Task Learning for Reducing Model Bias in Toxic Comment Detection
  Ameya Vaidya, Feng Mai, Yue Ning · 21 Sep 2019
LoRMIkA: Local rule-based model interpretability with k-optimal associations
  Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray L. Buntine · 11 Aug 2019
NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning
  M. Alzantot, Amy Widdicombe, S. Julier, Mani B. Srivastava · AAML, FAtt · 05 Aug 2019
A Factored Generalized Additive Model for Clinical Decision Support in the Operating Room
  Zhicheng Cui, Bradley A. Fritz, C. King, M. Avidan, Yixin Chen · 29 Jul 2019
AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks
  Jingyuan Wang, Yang Zhang, Ke Tang, Junjie Wu, Zhang Xiong · AIFin · 24 Jul 2019
Global Aggregations of Local Explanations for Black Box models
  I. V. D. Linden, H. Haned, Evangelos Kanoulas · FAtt · 05 Jul 2019
On Explaining Machine Learning Models by Evolving Crucial and Compact Features
  M. Virgolin, Tanja Alderliesten, Peter A. N. Bosman · 04 Jul 2019
Issues with post-hoc counterfactual explanations: a discussion
  Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki · CML · 11 Jun 2019
An Information Theoretic Interpretation to Deep Neural Networks
  Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng, G. Wornell · FAtt · 16 May 2019
"Why did you do that?": Explaining black box models with Inductive Synthesis
  Görkem Paçaci, David Johnson, S. McKeever, A. Hamfelt · 17 Apr 2019
Explainability in Human-Agent Systems
  A. Rosenfeld, A. Richardson · XAI · 17 Apr 2019