The Role of Explanation Styles and Perceived Accuracy on Decision Making in Predictive Process Monitoring

19 June 2025
Soobin Chae
Suhwan Lee
Hanna Hauptmann
Hajo A. Reijers
Xixi Lu
Main text: 15 pages, 5 figures, 5 tables; bibliography: 3 pages
Abstract

Predictive Process Monitoring (PPM) often uses deep learning models to predict the future behavior of ongoing processes, such as process outcomes. While these models achieve high accuracy, their lack of interpretability undermines user trust and adoption. Explainable AI (XAI) aims to address this challenge by providing the reasoning behind the predictions. However, current evaluations of XAI in PPM focus primarily on functional metrics (such as fidelity), overlooking user-centered aspects such as the effect of explanations on task performance and decision-making. This study investigates the effects of explanation style (feature importance, rule-based, and counterfactual) and perceived AI accuracy (low or high) on decision-making in PPM. We conducted a decision-making experiment in which users were presented with AI predictions, perceived accuracy levels, and explanations of different styles. Users' decisions were measured both before and after receiving explanations, allowing the assessment of objective metrics (task performance and agreement) and subjective metrics (decision confidence). Our findings show that both perceived accuracy and explanation style significantly affect users' decision-making.

@article{chae2025_2506.16617,
  title={The Role of Explanation Styles and Perceived Accuracy on Decision Making in Predictive Process Monitoring},
  author={Soobin Chae and Suhwan Lee and Hanna Hauptmann and Hajo A. Reijers and Xixi Lu},
  journal={arXiv preprint arXiv:2506.16617},
  year={2025}
}