Transfer learning (TL) with deep convolutional neural networks (DCNNs) is crucial for modern medical image classification (MIC). However, the current practice of fine-tuning the entire pretrained model is puzzling, as most MIC tasks rely only on low- to mid-level features, which are learned by the lower and middle layers of DCNNs. To resolve this puzzle, we perform careful empirical comparisons of several existing deep and shallow models, and propose a novel truncated TL method that consistently yields comparable or superior performance with more compact models on two MIC tasks. Our results highlight the importance of transferring the level of pretrained visual features that is commensurate with the intrinsic complexity of the task.
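The truncation idea described above can be sketched in PyTorch: keep only the lower layers of a pretrained backbone, freeze them, and train a small task-specific head. This is a minimal illustrative sketch, not the paper's exact recipe; the tiny backbone, the layer names, and the chosen truncation point are all assumptions standing in for a real pretrained DCNN such as a ResNet.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained DCNN backbone (in practice, e.g. a pretrained
# ResNet from torchvision). The three conv blocks loosely correspond to
# low-, mid-, and high-level feature extractors.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # low-level features
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # mid-level features
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # high-level features
    nn.MaxPool2d(2),
)

# Truncated TL: keep only the first five modules (through the mid-level
# block) and freeze them, so only low- to mid-level pretrained features
# are transferred. The cut point here is an illustrative assumption.
truncated = nn.Sequential(*list(backbone.children())[:5])
for p in truncated.parameters():
    p.requires_grad = False

# Compact classification head trained from scratch on the medical task
# (a hypothetical binary MIC task).
model = nn.Sequential(
    truncated,
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),
)

x = torch.randn(4, 3, 64, 64)  # batch of 4 RGB images, 64x64
logits = model(x)
print(logits.shape)            # torch.Size([4, 2])
```

Because the high-level block is discarded entirely, the resulting model is both smaller and cheaper to fine-tune than the full backbone, matching the paper's emphasis on compact models.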