ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring

arXiv:2009.02098 · 4 September 2020
Nijat Mehdiyev, Peter Fettke
Topics: AI4TS
Links: ArXiv · PDF · HTML

Papers citing "Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring"

32 / 32 papers shown
 1. A systematic review and taxonomy of explanations in decision support and recommender systems
    Ingrid Nunes, Dietmar Jannach · XAI · 38 / 329 / 0 · 15 Jun 2020
 2. Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction
    Farbod Taymouri, M. Rosa, S. Erfani, Z. Bozorgi, I. Verenich · GAN, AAML · 27 / 86 / 0 · 25 Mar 2020
 3. AI Trust in business processes: The need for process-aware explanations
    Steve T. K. Jan, Vatche Isahagian, Vinod Muthusamy · 26 / 23 / 0 · 21 Jan 2020
 4. Exploring Interpretability for Predictive Process Analytics
    Renuka Sindhgatta, Chun Ouyang, Catarina Moreira · 14 / 2 / 0 · 22 Dec 2019
 5. Explaining Explanations in AI
    Brent Mittelstadt, Chris Russell, Sandra Wachter · XAI · 83 / 664 / 0 · 04 Nov 2018
 6. Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
    Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim · FAtt, AAML · 46 / 128 / 0 · 08 Oct 2018
 7. Stakeholders in Explainable AI
    Alun D. Preece, Daniel Harborne, Dave Braines, Richard J. Tomsett, Supriyo Chakraborty · 35 / 154 / 0 · 29 Sep 2018
 8. On the Robustness of Interpretability Methods
    David Alvarez-Melis, Tommi Jaakkola · 60 / 524 / 0 · 21 Jun 2018
 9. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
    Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty · FaML · 101 / 164 / 0 · 20 Jun 2018
10. RISE: Randomized Input Sampling for Explanation of Black-box Models
    Vitali Petsiuk, Abir Das, Kate Saenko · FAtt · 136 / 1,164 / 0 · 19 Jun 2018
11. Locally Interpretable Models and Effects based on Supervised Partitioning (LIME-SUP)
    Linwei Hu, Jie Chen, V. Nair, Agus Sudjianto · FAtt · 40 / 63 / 0 · 02 Jun 2018
12. Explaining Explanations: An Overview of Interpretability of Machine Learning
    Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal · XAI · 75 / 1,849 / 0 · 31 May 2018
13. Local Rule-Based Explanations of Black Box Decision Systems
    Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, D. Pedreschi, Franco Turini, F. Giannotti · 118 / 436 / 0 · 28 May 2018
14. Predictive Process Monitoring Methods: Which One Suits Me Best?
    Chiara Di Francescomarino, Chiara Ghidini, F. Maggi, Fredrik P. Milani · 31 / 146 / 0 · 06 Apr 2018
15. On Cognitive Preferences and the Plausibility of Rule-based Models
    Johannes Fürnkranz, Tomáš Kliegr, Heiko Paulheim · LRM · 45 / 69 / 0 · 04 Mar 2018
16. Consistent Individualized Feature Attribution for Tree Ensembles
    Scott M. Lundberg, G. Erion, Su-In Lee · FAtt, TDI · 55 / 1,379 / 0 · 12 Feb 2018
17. A Survey Of Methods For Explaining Black Box Models
    Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI · 90 / 3,922 / 0 · 06 Feb 2018
18. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
    Fred Hohman, Minsuk Kahng, Robert S. Pienta, Duen Horng Chau · OOD, HAI · 68 / 538 / 0 · 21 Jan 2018
19. Distilling a Neural Network Into a Soft Decision Tree
    Nicholas Frosst, Geoffrey E. Hinton · 245 / 635 / 0 · 27 Nov 2017
20. Explanation in Artificial Intelligence: Insights from the Social Sciences
    Tim Miller · XAI · 227 / 4,229 / 0 · 22 Jun 2017
21. SmoothGrad: removing noise by adding noise
    D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg · FAtt, ODL · 192 / 2,215 / 0 · 12 Jun 2017
22. A Unified Approach to Interpreting Model Predictions
    Scott M. Lundberg, Su-In Lee · FAtt · 651 / 21,613 / 0 · 22 May 2017
23. Axiomatic Attribution for Deep Networks
    Mukund Sundararajan, Ankur Taly, Qiqi Yan · OOD, FAtt · 142 / 5,920 / 0 · 04 Mar 2017
24. Towards A Rigorous Science of Interpretable Machine Learning
    Finale Doshi-Velez, Been Kim · XAI, FaML · 354 / 3,742 / 0 · 28 Feb 2017
25. Predicting Process Behaviour using Deep Learning
    Joerg Evermann, Jana-Rebecca Rehse, Peter Fettke · 59 / 353 / 0 · 14 Dec 2016
26. Predictive Business Process Monitoring with LSTM Neural Networks
    Niek Tax, I. Verenich, M. Rosa, Marlon Dumas · 36 / 444 / 0 · 07 Dec 2016
27. Grad-CAM: Why did you say that?
    Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra · FAtt · 50 / 469 / 0 · 22 Nov 2016
28. The Mythos of Model Interpretability
    Zachary Chase Lipton · FaML · 134 / 3,672 / 0 · 10 Jun 2016
29. Time and Activity Sequence Prediction of Business Process Instances
    Mirko Polato, A. Sperduti, Andrea Burattin, M. Leoni · AI4TS · 22 / 144 / 0 · 24 Feb 2016
30. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 681 / 16,828 / 0 · 16 Feb 2016
31. Striving for Simplicity: The All Convolutional Net
    Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller · FAtt · 191 / 4,653 / 0 · 21 Dec 2014
32. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
    Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 207 / 7,252 / 0 · 20 Dec 2013