HiResCAM: Faithful Location Representation in Visual Attention for
Explainable 3D Medical Image Classification
Understanding model predictions is critical in healthcare, both to facilitate rapid verification of model correctness and to guard against models that exploit confounding variables. Here we address the challenging new task of explainable multilabel classification of volumetric medical images. We first illustrate a previously unrecognized limitation of the popular model explanation method Grad-CAM: as a side effect of its gradient averaging step, Grad-CAM sometimes highlights the wrong location. To solve this problem, we propose HiResCAM, a novel label-specific attention mechanism that is guaranteed to highlight only the locations the model used to make each prediction. Next, we introduce a mask loss that leverages HiResCAM to encourage the model to predict abnormalities based only on the organs in which those abnormalities appear. Our innovations yield a 37% improvement in explanation quality and state-of-the-art weakly supervised organ localization of multiple abnormalities in the RAD-ChestCT data set of 36,316 CT volumes. We also demonstrate on PASCAL VOC 2012 the differing properties of HiResCAM and Grad-CAM on natural images. Overall, this work advances convolutional neural network explanation approaches and the clinical applicability of multiple abnormality modeling in volumetric medical images.
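The core difference between the two attention maps can be seen in how they combine gradients with feature-map activations. Grad-CAM first averages each gradient map down to a single scalar weight per feature map, discarding the spatial structure of the gradients; HiResCAM instead multiplies gradients and activations elementwise before summing over feature maps, so each spatial position is weighted by its own gradient. A minimal NumPy sketch (not the authors' code; function and variable names are illustrative, and the optional final ReLU is omitted for clarity):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM attention map.

    activations, gradients: arrays of shape (F, H, W) holding the last
    convolutional layer's feature maps and the gradient of the class
    score with respect to those feature maps.
    """
    # Collapse each (H, W) gradient map to one scalar weight per feature
    # map -- this averaging step is where spatial gradient information
    # is lost, and why Grad-CAM can highlight the wrong location.
    weights = gradients.mean(axis=(1, 2))              # shape (F,)
    return np.einsum('f,fhw->hw', weights, activations)

def hirescam(activations, gradients):
    """HiResCAM attention map.

    Elementwise gradient-times-activation, summed over feature maps:
    every spatial position keeps its own gradient, so the map reflects
    exactly the locations that contributed to the class score.
    """
    return (gradients * activations).sum(axis=0)       # shape (H, W)
```

Note that when the gradients happen to be spatially constant within each feature map, the two maps coincide; they diverge precisely when the gradients vary across spatial positions, which is the case the abstract's "wrong location" failure mode describes.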