Readable Twins of Unreadable Models

17 April 2025
Krzysztof Pancerz
Piotr Kulicki
Michał Kalisz
Andrzej Burda
Maciej Stanisławski
Jaromir Sarzyński
Abstract

Creating responsible artificial intelligence (AI) systems is an important concern in contemporary AI research and development. One of the characteristics of responsible AI systems is their explainability. In this paper, we focus on explainable deep learning (XDL) systems. Drawing on the concept of digital twins of physical objects, we introduce the idea of creating readable twins, in the form of imprecise information flow models, for unreadable deep learning models. We present the complete procedure for transforming a deep learning model (DLM) into an imprecise information flow model (IIFM). The proposed approach is illustrated with a deep learning classification model for recognizing handwritten digits from the MNIST data set.
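
To make the setting concrete, the sketch below shows the kind of "unreadable" starting point the abstract refers to: a small MNIST digit classifier whose learned weights are not human-readable. The framework (TensorFlow/Keras), the dense architecture, and the activation-extraction step are assumptions for illustration only; the paper does not specify them, and the construction of the IIFM itself is described in the paper, not here.

# Minimal sketch of a deep learning model (DLM) for MNIST digit classification,
# assuming TensorFlow/Keras; not the authors' architecture.
import tensorflow as tf

# Load the MNIST handwritten-digit data set and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small dense network: reasonably accurate, but its weights are opaque,
# which motivates pairing it with a readable twin.
inputs = tf.keras.Input(shape=(28, 28))
flat = tf.keras.layers.Flatten()(inputs)
hidden = tf.keras.layers.Dense(128, activation="relu")(flat)
outputs = tf.keras.layers.Dense(10, activation="softmax")(hidden)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

# Intermediate activations such as these are the sort of raw material a
# readable twin could summarize into an information flow model.
extractor = tf.keras.Model(inputs, hidden)
activations = extractor(x_test[:5])
print(activations.shape)  # (5, 128)
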

@article{pancerz2025_2504.13150,
  title={Readable Twins of Unreadable Models},
  author={Krzysztof Pancerz and Piotr Kulicki and Michał Kalisz and Andrzej Burda and Maciej Stanisławski and Jaromir Sarzyński},
  journal={arXiv preprint arXiv:2504.13150},
  year={2025}
}