VirtualXAI: A User-Centric Framework for Explainability Assessment Leveraging GPT-Generated Personas

6 March 2025
Georgios Makridis, Vasileios Koukos, Georgios Fatouros, Dimosthenis Kyriazis
Abstract

In today's data-driven era, computational systems generate vast amounts of data that drive the digital transformation of industries, in which Artificial Intelligence (AI) plays a key role. The demand for eXplainable AI (XAI) has grown accordingly, aiming to enhance the interpretability, transparency, and trustworthiness of AI models. However, evaluating XAI methods remains challenging: existing evaluation frameworks typically focus on quantitative properties such as fidelity, consistency, and stability, while neglecting qualitative characteristics such as satisfaction and interpretability. In addition, practitioners face a lack of guidance in selecting appropriate datasets, AI models, and XAI methods, a major hurdle in human-AI collaboration. To address these gaps, we propose a framework that integrates quantitative benchmarking with qualitative user assessments through virtual personas based on the "Anthology" of Large Language Model (LLM) backstories. Our framework also incorporates a content-based recommender system that leverages dataset-specific characteristics to match new input data against a repository of benchmarked datasets. This yields an estimated XAI score and provides tailored recommendations for both the optimal AI model and the XAI method for a given scenario.
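The persona-based qualitative assessment can be pictured as prompting an LLM to adopt a generated backstory and then rate an explanation. The snippet below is a minimal sketch of such a prompt builder; the wording, rating scale, and function name are illustrative assumptions, not the paper's exact protocol.

def persona_satisfaction_prompt(backstory: str, explanation: str) -> str:
    # Illustrative prompt template (not the paper's exact wording):
    # the model adopts an Anthology-style persona backstory, then rates
    # the explanation for satisfaction and interpretability on a 1-5 scale.
    return (
        f"Adopt the following persona:\n{backstory}\n\n"
        f"You are shown this model explanation:\n{explanation}\n\n"
        "On a scale of 1 (not at all) to 5 (fully), how satisfying and "
        "interpretable is this explanation? Answer with a single number."
    )

Likewise, the content-based recommender can be read as a nearest-neighbour lookup over dataset meta-features. The following is a minimal sketch assuming cosine similarity between numeric meta-feature vectors and a similarity-weighted average of stored benchmark scores; the feature set, similarity measure, and names below are hypothetical, not taken from the paper.

import numpy as np

def estimate_xai_score(new_meta, repo_meta, repo_scores, k=3):
    # Match a new dataset's meta-feature vector against a repository of
    # benchmarked datasets and return a similarity-weighted XAI score.
    # In practice the meta-features would be standardized first; raw
    # values are used here only to keep the sketch short.
    def unit(v):
        v = np.asarray(v, dtype=float)
        n = np.linalg.norm(v)
        return v / n if n else v

    new_u = unit(new_meta)
    sims = np.array([unit(r) @ new_u for r in repo_meta])  # cosine similarities
    top = np.argsort(sims)[-k:]                            # k most similar datasets
    weights = sims[top] / sims[top].sum()
    return float(weights @ np.asarray(repo_scores, dtype=float)[top]), top

# Hypothetical meta-features: [n_samples, n_features, class-imbalance ratio]
repo = [[1000, 20, 0.50], [50000, 300, 0.90], [800, 15, 0.40]]
scores = [0.72, 0.55, 0.80]  # illustrative benchmarked XAI scores
est, idx = estimate_xai_score([900, 18, 0.45], repo, scores, k=2)
print(f"estimated XAI score: {est:.2f} (nearest datasets: {idx})")

The similarity-weighted average is only one plausible aggregation; the abstract does not specify one, and a regression over meta-features would be an equally reasonable reading.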

@article{makridis2025_2503.04261,
  title={VirtualXAI: A User-Centric Framework for Explainability Assessment Leveraging GPT-Generated Personas},
  author={Georgios Makridis and Vasileios Koukos and Georgios Fatouros and Dimosthenis Kyriazis},
  journal={arXiv preprint arXiv:2503.04261},
  year={2025}
}