ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

A Survey Of Methods For Explaining Black Box Models
arXiv:1802.01933 (v3, latest) · 6 February 2018
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models"

50 / 1,104 papers shown
• Deep Learning for Android Malware Defenses: a Systematic Literature Review · Yue Liu, Chakkrit Tantithamthavorn, Li Li, Yepang Liu · AAML · 09 Mar 2021
• Explanations in Autonomous Driving: A Survey · Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze · 09 Mar 2021
• Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications · Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge · CML · 07 Mar 2021
• Ensembles of Random SHAPs · Lev V. Utkin, A. Konstantinov · FAtt · 04 Mar 2021
• Evaluating Robustness of Counterfactual Explanations · André Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, M. Schilling, Barbara Hammer · 03 Mar 2021
• Interpretable Multi-Modal Hate Speech Detection · Prashanth Vijayaraghavan, Hugo Larochelle, D. Roy · 02 Mar 2021
• Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms · Miguel Á. Carreira-Perpiñán, Suryabhan Singh Hada · CML, AAML · 01 Mar 2021
• Visualizing Rule Sets: Exploration and Validation of a Design Space · Jun Yuan, O. Nov, E. Bertini · 01 Mar 2021
• Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence · Atoosa Kasirzadeh · 01 Mar 2021
• Benchmarking and Survey of Explanation Methods for Black Box Models · F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo · XAI · 25 Feb 2021
• HiPaR: Hierarchical Pattern-aided Regression · Luis Galárraga, Olivier Pelgrin, Alexandre Termier · 24 Feb 2021
• Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing · Sarah Wiegreffe, Ana Marasović · XAI · 24 Feb 2021
• DNN2LR: Automatic Feature Crossing for Credit Scoring · Qiang Liu, Zhaocheng Liu, Haoli Zhang, Yuntian Chen, Jun Zhu · 24 Feb 2021
• Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks · Ginevra Carbone, G. Sanguinetti, Luca Bortolussi · FAtt, AAML · 22 Feb 2021
• SQAPlanner: Generating Data-Informed Software Quality Improvement Plans · Dilini Sewwandi Rajapaksha, Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, Christoph Bergmeir, J. Grundy, Wray Buntine · 19 Feb 2021
• Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs · Harini Suresh, Kathleen M. Lewis, John Guttag, Arvind Satyanarayan · FAtt · 17 Feb 2021
• What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research · Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum · XAI · 15 Feb 2021
• Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring · S. Sattarzadeh, M. Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo J. Kim · FAtt · 15 Feb 2021
• Deep Co-Attention Network for Multi-View Subspace Learning · Lecheng Zheng, Y. Cheng, Hongxia Yang, Nan Cao, Jingrui He · 15 Feb 2021
• What does LIME really see in images? · Damien Garreau, Dina Mardaoui · FAtt · 11 Feb 2021
• Rationally Inattentive Utility Maximization for Interpretable Deep Image Classification · Kunal Pattanayak, Vikram Krishnamurthy · 09 Feb 2021
• Towards a mathematical framework to inform Neural Network modelling via Polynomial Regression · Pablo Morala, Jenny Alexandra Cifuentes, R. Lillo, Iñaki Ucar · 07 Feb 2021
• Bandits for Learning to Explain from Explanations · Freya Behrens, Stefano Teso, Davide Mottin · FAtt · 07 Feb 2021
• CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks · Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri · 05 Feb 2021
• EUCA: the End-User-Centered Explainable AI Framework · Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh · 04 Feb 2021
• Directive Explanations for Actionable Explainability in Machine Learning Applications · Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, L. Sonenberg, Eduardo Velloso, F. Vetere · 03 Feb 2021
• A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks · Atefeh Shahroudnejad · FaML, AAML, AI4CE, XAI · 02 Feb 2021
• Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens · Maia L. Jacobs, Jeffrey He, Melanie F. Pradier, Barbara D. Lam, Andrew C Ahn, T. McCoy, R. Perlis, Finale Doshi-Velez, Krzysztof Z. Gajos · 01 Feb 2021
• Explaining Natural Language Processing Classifiers with Occlusion and Language Modeling · David Harbecke · AAML · 28 Jan 2021
• Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems · Jennifer Cobbe, M. S. Lee, Jatinder Singh · 26 Jan 2021
• A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations · Barry Smyth, Mark T. Keane · CML · 22 Jan 2021
• GLocalX -- From Local to Global Explanations of Black Box AI Models · Mattia Setzu, Riccardo Guidotti, A. Monreale, Franco Turini, D. Pedreschi, F. Giannotti · 19 Jan 2021
• Towards interpreting ML-based automated malware detection models: a survey · Yuzhou Lin, Xiaolin Chang · 15 Jan 2021
• Explainability of deep vision-based autonomous driving systems: Review and challenges · Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI · 13 Jan 2021
• Towards Interpretable Ensemble Learning for Image-based Malware Detection · Yuzhou Lin, Xiaolin Chang · AAML · 13 Jan 2021
• How Much Automation Does a Data Scientist Want? · Dakuo Wang, Q. V. Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael J. Muller, Lisa Amini · AI4CE · 07 Jan 2021
• Explainable AI and Adoption of Financial Algorithmic Advisors: an Experimental Study · D. David, Yehezkel S. Resheff, Talia Tron · 05 Jan 2021
• A Survey on Neural Network Interpretability · Yu Zhang, Peter Tiño, A. Leonardis, K. Tang · FaML, XAI · 28 Dec 2020
• Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model · Laura Giordano, Daniele Theseider Dupré · 24 Dec 2020
• On Relating 'Why?' and 'Why Not?' Explanations · Alexey Ignatiev, Nina Narodytska, Nicholas M. Asher, Sasha Rubin · XAI, FAtt, LRM · 21 Dec 2020
• Explaining Black-box Models for Biomedical Text Classification · M. Moradi, Matthias Samwald · 20 Dec 2020
• XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory · Nazanin Fouladgar, Kary Främling · XAI · 17 Dec 2020
• On Exploiting Hitting Sets for Model Reconciliation · Stylianos Loukas Vasileiou, Alessandro Previti, William Yeoh · 16 Dec 2020
• Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges · Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha · 14 Dec 2020
• Explanation from Specification · Harish Naik, Gyorgy Turán · XAI · 13 Dec 2020
• Demystifying Deep Neural Networks Through Interpretation: A Survey · Giang Dao, Minwoo Lee · FaML, FAtt · 13 Dec 2020
• xRAI: Explainable Representations through AI · Christian Bartelt, Sascha Marton, Heiner Stuckenschmidt · 10 Dec 2020
• Hybrid analytic and machine-learned baryonic property insertion into galactic dark matter haloes · Ben Moews, R. Davé, Sourav Mitra, Sultan Hassan, W. Cui · AI4CE · 10 Dec 2020
• Influence-Driven Explanations for Bayesian Network Classifiers · Antonio Rago, Emanuele Albini, P. Baroni, Francesca Toni · 10 Dec 2020
• Deep Argumentative Explanations · Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni · AAML · 10 Dec 2020