ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Home / Papers / 1802.01933 / Cited By
A Survey Of Methods For Explaining Black Box Models
6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
[XAI]
Links: arXiv (abs) · PDF · HTML

Papers citing "A Survey Of Methods For Explaining Black Box Models"

50 of 1,104 citing papers shown.
The Bouncer Problem: Challenges to Remote Explainability
Erwan Le Merrer, Gilles Tredan (03 Oct 2019)

Leveraging Model Interpretability and Stability to increase Model Robustness
Leilei Gan, T. Michel, Alexandre Briot (01 Oct 2019) [AAML, FAtt]

Sampling the "Inverse Set" of a Neuron: An Approach to Understanding Neural Nets
Suryabhan Singh Hada, M. A. Carreira-Perpiñán (27 Sep 2019) [BDL]

Adversarial ML Attack on Self Organizing Cellular Networks
Salah-ud-din Farooq, Muhammad Usama, Junaid Qadir, M. Imran (26 Sep 2019) [AAML]

Towards Explainability for a Civilian UAV Fleet Management using an Agent-based Approach
Yazan Mualla, A. Najjar, T. Kampik, I. Tchappi, Stéphane Galland, Christophe Nicolle (22 Sep 2019)

Empirical Analysis of Multi-Task Learning for Reducing Model Bias in Toxic Comment Detection
Ameya Vaidya, Feng Mai, Yue Ning (21 Sep 2019)

Towards Explainable Neural-Symbolic Visual Reasoning
Adrien Bennetot, J. Laurent, Raja Chatila, Natalia Díaz Rodríguez (19 Sep 2019) [XAI]

Class Feature Pyramids for Video Explanation
Alexandros Stergiou, G. Kapidis, Grigorios Kalliatakis, C. Chrysoulas, R. Poppe, R. Veltkamp (18 Sep 2019) [FAtt]

Towards a Rigorous Evaluation of XAI Methods on Time Series
U. Schlegel, Hiba Arnout, Mennatallah El-Assady, Daniela Oelke, Daniel A. Keim (16 Sep 2019) [XAI, AI4TS]

Towards A Robot Explanation System: A Survey and Our Approach to State Summarization, Storage and Querying, and Human Interface
Zhao Han, Jordan Allspaw, Adam Norton, Holly Yanco (13 Sep 2019)

How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations
Betty van Aken, B. Winter, Alexander Löser, Felix Alexander Gers (11 Sep 2019)

Pluggable Social Artificial Intelligence for Enabling Human-Agent Teaming
J. Diggelen, Jonathan Barnhoorn, Marieke M. M. Peeters, Wessel van Staal, M. L. Stolk, B. Vecht, J. V. D. Waa, J. Schraagen (10 Sep 2019) [LLMAG]

Improving the Explainability of Neural Sentiment Classifiers via Data Augmentation
Hanjie Chen, Yangfeng Ji (10 Sep 2019)

Learning Fair Rule Lists
Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala (09 Sep 2019) [FaML]

Machine learning for automatic construction of pseudo-realistic pediatric abdominal phantoms
M. Virgolin, Ziyuan Wang, Tanja Alderliesten, Peter A. N. Bosman (09 Sep 2019)

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang (06 Sep 2019) [XAI]

Personalization of Deep Learning
Johannes Schneider, M. Vlachos (06 Sep 2019)

Machine learning algorithms to infer trait-matching and predict species interactions in ecological networks
Maximilian Pichler, V. Boreux, A. Klein, M. Schleuning, F. Hartig (26 Aug 2019)

Computing Linear Restrictions of Neural Networks
Matthew Sotoudeh, Aditya V. Thakur (17 Aug 2019)

LoRMIkA: Local rule-based model interpretability with k-optimal associations
Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray Buntine (11 Aug 2019)

Measurable Counterfactual Local Explanations for Any Classifier
Adam White, Artur Garcez (08 Aug 2019) [FAtt]

NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning
M. Alzantot, Amy Widdicombe, S. Julier, Mani B. Srivastava (05 Aug 2019) [AAML, FAtt]

Efficient computation of counterfactual explanations of LVQ models
André Artelt, Barbara Hammer (02 Aug 2019)

A Factored Generalized Additive Model for Clinical Decision Support in the Operating Room
Zhicheng Cui, Bradley A. Fritz, C. King, M. Avidan, Yixin Chen (29 Jul 2019)

explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning
Thilo Spinner, U. Schlegel, H. Schäfer, Mennatallah El-Assady (29 Jul 2019) [HAI]

AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks
Jingyuan Wang, Yang Zhang, Ke Tang, Junjie Wu, Zhang Xiong (24 Jul 2019) [AIFin]

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, X. Renard, Marcin Detyniecki (22 Jul 2019)

Minimizing the expected value of the asymmetric loss and an inequality of the variance of the loss
Naoya Yamaguchi, Yuka Yamaguchi, R. Nishii (18 Jul 2019)

Why Does My Model Fail? Contrastive Local Explanations for Retail Forecasting
Ana Lucic, H. Haned, Maarten de Rijke (17 Jul 2019)

Global Aggregations of Local Explanations for Black Box models
I. V. D. Linden, H. Haned, Evangelos Kanoulas (05 Jul 2019) [FAtt]

Explaining Predictions from Tree-based Boosting Ensembles
Ana Lucic, H. Haned, Maarten de Rijke (04 Jul 2019) [FAtt]

Consistent Regression using Data-Dependent Coverings
Vincent Margot, Jean-Patrick Baudry, Frédéric Guilloux, Olivier Wintenberger (04 Jul 2019)

On Explaining Machine Learning Models by Evolving Crucial and Compact Features
M. Virgolin, Tanja Alderliesten, Peter A. N. Bosman (04 Jul 2019)

Model Bridging: Connection between Simulation Model and Neural Network
Keiichi Kisamori, Keisuke Yamazaki, Yuto Komori, Hiroshi Tokieda (22 Jun 2019)

Disentangling Influence: Using Disentangled Representations to Audit Model Predictions
Charles Marx, R. L. Phillips, Sorelle A. Friedler, C. Scheidegger, Suresh Venkatasubramanian (20 Jun 2019) [TDI, CML, MLAU]

Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks
R. Confalonieri, Tillman Weyde, Tarek R. Besold, Fermín Moscoso del Prado Martín (19 Jun 2019)

MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning
Marko Vasic, Andrija Petrović, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, S. Khurshid (16 Jun 2019) [OffRL, MoE]

LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding
Ioannis Mollas, Nikolaos Bassiliades, Grigorios Tsoumakas (15 Jun 2019)

NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language
Leon Weber, Pasquale Minervini, Jannes Münchmeyer, Ulf Leser, Tim Rocktäschel (14 Jun 2019) [NAI, LRM]

Understanding artificial intelligence ethics and safety
David Leslie (11 Jun 2019) [FaML, AI4TS]

Issues with post-hoc counterfactual explanations: a discussion
Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki (11 Jun 2019) [CML]

Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Patrick Hall, Navdeep Gill, N. Schmidt (08 Jun 2019) [SILM, XAI, FaML]

Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees
X. Renard, Nicolas Woloszko, Jonathan Aigrain, Marcin Detyniecki (04 Jun 2019)

Learning Interpretable Shapelets for Time Series Classification through Adversarial Regularization
Yichang Wang, Rémi Emonet, Elisa Fromont, S. Malinowski, Etienne Ménager, Loic Mosser, R. Tavenard (03 Jun 2019) [AI4TS]

Interpreting a Recurrent Neural Network's Predictions of ICU Mortality Risk
L. Ho, M. Aczon, D. Ledbetter, R. Wetzel (23 May 2019)

Computationally Efficient Feature Significance and Importance for Machine Learning Models
Enguerrand Horel, K. Giesecke (23 May 2019) [FAtt]

Neural-Symbolic Argumentation Mining: an Argument in Favor of Deep Learning and Reasoning
Andrea Galassi, Kristian Kersting, Marco Lippi, Xiaoting Shao, Paolo Torroni (22 May 2019) [NAI]

Explainable Machine Learning for Scientific Insights and Discoveries
R. Roscher, B. Bohn, Marco F. Duarte, Jochen Garcke (21 May 2019) [XAI]

The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
Mark T. Keane, Eoin M. Kenny (20 May 2019)

CERTIFAI: Counterfactual Explanations for Robustness, Transparency, Interpretability, and Fairness of Artificial Intelligence models
Shubham Sharma, Jette Henderson, Joydeep Ghosh (20 May 2019)