arXiv:2411.07243

Neuropsychology and Explainability of AI: A Distributional Approach to the Relationship Between Activation Similarity of Neural Categories in Synthetic Cognition

23 October 2024
Michael Pichat
Enola Campoli
William Pogrund
Jourdan Wilson
Michael Veillet-Guillem
Anton Melkozerov
Paloma Pichat
Armanush Gasparian
Samuel Demarchi
Judicael Poumay
Abstract

We propose a neuropsychological approach to the explainability of artificial neural networks, which involves using concepts from human cognitive psychology as relevant heuristic references for developing synthetic explanatory frameworks that align with human modes of thought. The analogical concepts mobilized here, which are intended to create such an epistemological bridge, are those of categorization and similarity, as these notions are particularly suited to the categorical "nature" of the reconstructive information processing performed by artificial neural networks. Our study aims to reveal a unique process of synthetic cognition, that of the categorical convergence of highly activated tokens. We attempt to explain this process with the idea that the categorical segment created by a neuron is actually the result of a superposition of categorical sub-dimensions within its input vector space.
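
The abstract's central claim, that a neuron's most strongly activated tokens converge toward a shared category, can be probed with a simple similarity comparison: measure how alike a neuron's top-activating tokens are to one another versus a random sample of tokens. The Python sketch below is not the authors' code and makes several assumptions for illustration (synthetic embeddings, a dot-product activation, an arbitrary top-k cutoff of 20, and mean pairwise cosine similarity as the similarity measure); it only shows the general shape of such a probe.

    # Illustrative sketch, not the paper's method: synthetic data, the
    # top-k cutoff, and cosine similarity are all assumptions made here.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy embedding table: one row per token in a small vocabulary.
    vocab_size, dim = 1000, 64
    embeddings = rng.normal(size=(vocab_size, dim))

    # Toy neuron: an input weight vector living in the same space.
    neuron_w = rng.normal(size=dim)

    # Pre-activation of each token for this neuron (dot product).
    activations = embeddings @ neuron_w

    def mean_pairwise_cosine(vectors):
        """Average cosine similarity over all distinct pairs of rows."""
        normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
        sims = normed @ normed.T
        upper = np.triu_indices(len(vectors), k=1)
        return float(sims[upper].mean())

    # Categorical-convergence probe: are the most strongly activated tokens
    # more similar to one another than a random sample of tokens?
    k = 20
    top_idx = np.argsort(activations)[-k:]
    rand_idx = rng.choice(vocab_size, size=k, replace=False)

    print("top-k token similarity :", mean_pairwise_cosine(embeddings[top_idx]))
    print("random token similarity:", mean_pairwise_cosine(embeddings[rand_idx]))

On real model weights and token embeddings, the same comparison could be stratified by activation level to examine how token similarity varies across activation zones, which is the distributional relationship the abstract refers to.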
