© 2025 ResearchTrend.AI, All rights reserved.

A Survey Of Methods For Explaining Black Box Models
arXiv:1802.01933
6 February 2018
Riccardo Guidotti
A. Monreale
Salvatore Ruggieri
Franco Turini
D. Pedreschi
F. Giannotti
    XAI

Papers citing "A Survey Of Methods For Explaining Black Box Models"

50 / 410 papers shown
InfFeed: Influence Functions as a Feedback to Improve the Performance of Subjective Tasks
Somnath Banerjee
Maulindu Sarkar
Punyajoy Saha
Binny Mathew
Animesh Mukherjee
TDI
22 Feb 2024
On Explaining Unfairness: An Overview
Christos Fragkathoulas
Vasiliki Papanikou
Danae Pla Karidi
E. Pitoura
XAI
FaML
16 Feb 2024
Explaining Probabilistic Models with Distributional Values
Luca Franceschi
Michele Donini
Cédric Archambeau
Matthias Seeger
FAtt
15 Feb 2024
AI, Meet Human: Learning Paradigms for Hybrid Decision Making Systems
Clara Punzi
Roberto Pellungrini
Mattia Setzu
F. Giannotti
D. Pedreschi
09 Feb 2024
A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research
Sicong Cao
Xiaobing Sun
Ratnadira Widyasari
David Lo
Xiaoxue Wu
...
Jiale Zhang
Bin Li
Wei Liu
Di Wu
Yixin Chen
26 Jan 2024
Explainable Bayesian Optimization
Tanmay Chakraborty
Christin Seifert
Christian Wirth
24 Jan 2024
Deep spatial context: when attention-based models meet spatial regression
Paulina Tomaszewska
Elżbieta Sienkiewicz
Mai P. Hoang
Przemysław Biecek
18 Jan 2024
Robust Stochastic Graph Generator for Counterfactual Explanations
Mario Alfonso Prado-Romero
Bardh Prenkaj
Giovanni Stilo
CML
18 Dec 2023
Perceptual Musical Features for Interpretable Audio Tagging
Vassilis Lyberatos
Spyridon Kantarelis
Edmund Dervakos
Giorgos Stamou
18 Dec 2023
Accelerating the Global Aggregation of Local Explanations
Alon Mor
Yonatan Belinkov
B. Kimelfeld
FAtt
13 Dec 2023
Trust, distrust, and appropriate reliance in (X)AI: a survey of empirical evaluation of user trust
Roel W. Visser
Tobias M. Peters
Ingrid Scharlau
Barbara Hammer
04 Dec 2023
Machine Learning For An Explainable Cost Prediction of Medical Insurance
U. Orji
Elochukwu A. Ukwandu
23 Nov 2023
On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc
Pascal Germain
FaML
20 Nov 2023
Deep Natural Language Feature Learning for Interpretable Prediction
Felipe Urrutia
Cristian Buc
Valentin Barriere
09 Nov 2023
Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective
A. Haque
A. Najmul Islam
Patrick Mikalef
01 Nov 2023
Learning impartial policies for sequential counterfactual explanations using Deep Reinforcement Learning
E. Panagiotou
Eirini Ntoutsi
CML
OffRL
BDL
01 Nov 2023
A Mass-Conserving-Perceptron for Machine Learning-Based Modeling of Geoscientific Systems
Yuan-Heng Wang
Hoshin V. Gupta
AI4CE
12 Oct 2023
Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann John
Vineeth N. Balasubramanian
C. V. Jawahar
CVBM
26 Sep 2023
An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems
Andreas Metzger
Jon Bartel
Jan Laufer
25 Sep 2023
Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis
Anahid N. Jalali
Bernhard Haslhofer
Simone Kriglstein
Andreas Rauber
FAtt
21 Sep 2023
Beyond XAI: Obstacles Towards Responsible AI
Yulu Pi
07 Sep 2023
Ensemble of Counterfactual Explainers
Riccardo Guidotti
Salvatore Ruggieri
CML
29 Aug 2023
SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator
Lev V. Utkin
Danila Eremenko
A. Konstantinov
07 Aug 2023
Using Kernel SHAP XAI Method to optimize the Network Anomaly Detection Model
Khushnaseeb Roshan
Aasim Zafar
31 Jul 2023
Toward Transparent Sequence Models with Model-Based Tree Markov Model
Chan Hsu
Wei Huang
Jun-Ting Wu
Chih-Yuan Li
Yihuang Kang
28 Jul 2023
Assessment of the suitability of degradation models for the planning of CCTV inspections of sewer pipes
Fidae El Morer
Stefan H. A. Wittek
Andreas Rausch
12 Jul 2023
A User Study on Explainable Online Reinforcement Learning for Adaptive Systems
Andreas Metzger
Jan Laufer
Felix Feit
Klaus Pohl
OffRL
OnRL
09 Jul 2023
Reliable AI: Does the Next Generation Require Quantum Computing?
Aras Bacho
Holger Boche
Gitta Kutyniok
03 Jul 2023
Towards Explainable TOPSIS: Visual Insights into the Effects of Weights and Aggregations on Rankings
R. Susmaga
Izabela Szczech
D. Brzezinski
13 Jun 2023
Explainable Predictive Maintenance
Sepideh Pashami
Sławomir Nowaczyk
Yuantao Fan
Jakub Jakubowski
Nuno Paiva
...
Bruno Veloso
M. Sayed-Mouchaweh
L. Rajaoarisoa
Grzegorz J. Nalepa
João Gama
08 Jun 2023
Evaluating Machine Learning Models with NERO: Non-Equivariance Revealed on Orbits
Zhuokai Zhao
Takumi Matsuzawa
W. Irvine
Michael Maire
G. Kindlmann
31 May 2023
Reason to explain: Interactive contrastive explanations (REASONX)
Laura State
Salvatore Ruggieri
Franco Turini
LRM
29 May 2023
The Case Against Explainability
Hofit Wasserman Rozen
N. Elkin-Koren
Ran Gilad-Bachrach
AILaw
ELM
20 May 2023
BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic
Albert Bifet
Fabian M. Suchanek
FAtt
18 May 2023
Open problems in causal structure learning: A case study of COVID-19 in the UK
Anthony C. Constantinou
N. K. Kitson
Yang Liu
Kiattikun Chobtham
Arian Hashemzadeh
Praharsh Nanavati
R. Mbuvha
Bruno Petrungaro
CML
05 May 2023
The Power of Typed Affine Decision Structures: A Case Study
Gerrit Nolte
Maximilian Schlüter
Alnis Murtovi
Bernhard Steffen
AAML
28 Apr 2023
Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
Nijat Mehdiyev
Maxim Majlatow
Peter Fettke
12 Apr 2023
Learning Optimal Fair Scoring Systems for Multi-Class Classification
Julien Rouzot
Julien Ferry
Marie-José Huguet
FaML
11 Apr 2023
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
M. Rubaiyat Hossain Mondal
Prajoy Podder
10 Apr 2023
Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
Emilio Ferrara
SILM
07 Apr 2023
Local Interpretability of Random Forests for Multi-Target Regression
Avraam Bardos
Nikolaos Mylonas
Ioannis Mollas
Grigorios Tsoumakas
AAML
29 Mar 2023
Improving Prediction Performance and Model Interpretability through Attention Mechanisms from Basic and Applied Research Perspectives
Shunsuke Kitada
FaML
HAI
AI4CE
24 Mar 2023
Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Greta Warren
Mark T. Keane
Christophe Guéret
Eoin Delaney
16 Mar 2023
"How to make them stay?" -- Diverse Counterfactual Explanations of Employee Attrition
André Artelt
Andreas Gregoriades
08 Mar 2023
RACCER: Towards Reachable and Certain Counterfactual Explanations for Reinforcement Learning
Jasmina Gajcin
Ivana Dusparic
CML
08 Mar 2023
Learning Human-Compatible Representations for Case-Based Decision Support
Han Liu
Yizhou Tian
Chacha Chen
Shi Feng
Yuxin Chen
Chenhao Tan
06 Mar 2023
NxPlain: Web-based Tool for Discovery of Latent Concepts
Fahim Dalvi
Nadir Durrani
Hassan Sajjad
Tamim Jaban
Musab Husaini
Ummar Abbas
06 Mar 2023
A System's Approach Taxonomy for User-Centred XAI: A Survey
Ehsan Emamirad
Pouya Ghiasnezhad Omran
A. Haller
S. Gregor
06 Mar 2023
SUNY: A Visual Interpretation Framework for Convolutional Neural Networks from a Necessary and Sufficient Perspective
Xiwei Xuan
Ziquan Deng
Hsuan-Tien Lin
Z. Kong
Kwan-Liu Ma
AAML
FAtt
01 Mar 2023
Concept Learning for Interpretable Multi-Agent Reinforcement Learning
Renos Zabounidis
Joseph Campbell
Simon Stepputtis
Dana Hughes
Katia P. Sycara
23 Feb 2023