
PointExplainer: Towards Transparent Parkinson's Disease Diagnosis

Abstract

Deep neural networks have shown potential in analyzing digitized hand-drawn signals for early diagnosis of Parkinson's disease. However, the lack of clear interpretability in existing diagnostic methods presents a challenge to clinical trust. In this paper, we propose PointExplainer, an explainable diagnostic strategy to identify hand-drawn regions that drive model diagnosis. Specifically, PointExplainer assigns discrete attribution values to hand-drawn segments, explicitly quantifying their relative contributions to the model's decision. Its key components include: (i) a diagnosis module, which encodes hand-drawn signals into 3D point clouds to represent hand-drawn trajectories, and (ii) an explanation module, which trains an interpretable surrogate model to approximate the local behavior of the black-box diagnostic model. We also introduce consistency measures to further address the issue of faithfulness in explanations. Extensive experiments on two benchmark datasets and a newly constructed dataset show that PointExplainer provides intuitive explanations with no degradation in diagnostic performance. The source code is available at this https URL.
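The two modules described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the 3D point cloud is built from pen position plus normalized time, and it stands in a simple LIME-style local linear surrogate (segment masking plus least-squares fit) for the explanation module; the function names and parameters are hypothetical.

```python
import numpy as np

def signal_to_point_cloud(x, y, t):
    """Encode a digitized hand-drawn signal as an (N, 3) point cloud.

    Assumption (not from the paper): the three coordinates are pen
    position (x, y) plus time, each normalized to [0, 1] so that no
    axis dominates distances in the point cloud.
    """
    pts = np.stack([np.asarray(x, float),
                    np.asarray(y, float),
                    np.asarray(t, float)], axis=1)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    return (pts - mins) / np.maximum(maxs - mins, 1e-12)

def segment_attributions(points, model_fn, n_segments=8,
                         n_samples=256, seed=0):
    """LIME-style surrogate: mask contiguous trajectory segments, query
    the black-box model, and fit a linear model whose coefficients serve
    as discrete per-segment attribution values (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = len(points)
    # assign each point to one of n_segments contiguous segments
    seg_ids = np.minimum(np.arange(n) * n_segments // n, n_segments - 1)
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0] = 1  # include the unperturbed drawing
    preds = np.array([
        model_fn(points[masks[m][seg_ids].astype(bool)])
        for m in range(n_samples)
    ])
    # least-squares linear surrogate; slope per segment = attribution
    X = np.column_stack([np.ones(n_samples), masks])
    coef, *_ = np.linalg.lstsq(X, preds, rcond=None)
    return coef[1:]
```

A toy black-box model (e.g. any function mapping a point subset to a score) can be plugged in as `model_fn`; segments whose removal shifts the score most receive the largest-magnitude attributions.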

@article{wang2025_2505.03833,
  title={PointExplainer: Towards Transparent Parkinson's Disease Diagnosis},
  author={Xuechao Wang and Sven Nomm and Junqing Huang and Kadri Medijainen and Aaro Toomela and Michael Ruzhansky},
  journal={arXiv preprint arXiv:2505.03833},
  year={2025}
}