ResearchTrend.AI
Enhancing Vision Transformer Explainability Using Artificial Astrocytes

20 May 2025
Nicolas Echevarrieta-Catalan
Ana Ribas-Rodriguez
Francisco Cedron
Odelia Schwartz
Vanessa Aguiar-Pulido
arXiv (abs) · PDF · HTML
Main: 5 pages · 4 figures · Bibliography: 2 pages · 2 tables
Abstract

Machine learning models achieve high precision, but their decision-making processes often lack explainability, and as model complexity increases, explainability typically decreases. Existing efforts to improve explainability primarily involve developing new eXplainable artificial intelligence (XAI) techniques or incorporating explainability constraints during training. While these approaches yield specific improvements, their applicability remains limited. In this work, we propose the Vision Transformer with artificial Astrocytes (ViTA). This training-free approach is inspired by neuroscience and enhances the reasoning of a pretrained deep neural network to generate more human-aligned explanations. We evaluated our approach using two well-known XAI techniques, Grad-CAM and Grad-CAM++, and compared it to a standard Vision Transformer (ViT). Using the ClickMe dataset, we quantified the similarity between the heatmaps produced by the XAI techniques and a (human-aligned) ground truth. Our results consistently show that incorporating artificial astrocytes improves the alignment of model explanations with human perception, yielding statistically significant improvements across all XAI techniques and metrics used.
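The evaluation the abstract describes boils down to scoring how well an XAI heatmap matches a human attention map. The paper's exact similarity metrics are not stated here, so the sketch below uses Spearman rank correlation between flattened heatmaps purely as an illustrative stand-in; `heatmap_alignment` and the toy maps are hypothetical names, not the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr


def heatmap_alignment(xai_map: np.ndarray, human_map: np.ndarray) -> float:
    """Rank-correlate a flattened XAI heatmap with a human attention map.

    Spearman correlation is one common choice for this kind of
    comparison; the actual metrics used in the paper may differ.
    """
    rho, _ = spearmanr(xai_map.ravel(), human_map.ravel())
    return float(rho)


# Toy 4x4 maps standing in for a ClickMe-style ground truth and a
# Grad-CAM heatmap that mostly agrees with it.
rng = np.random.default_rng(0)
human = rng.random((4, 4))
model = human + 0.1 * rng.random((4, 4))
print(heatmap_alignment(model, human))
```

A higher score means the model's explanation ranks the same image regions as salient that humans do; comparing this score for ViT versus ViTA heatmaps is the kind of measurement the abstract reports.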

View on arXiv
@article{echevarrieta-catalan2025_2505.21513,
  title={Enhancing Vision Transformer Explainability Using Artificial Astrocytes},
  author={Nicolas Echevarrieta-Catalan and Ana Ribas-Rodriguez and Francisco Cedron and Odelia Schwartz and Vanessa Aguiar-Pulido},
  journal={arXiv preprint arXiv:2505.21513},
  year={2025}
}