Data, Depth, and Design: Learning Reliable Models for Melanoma Screening

The state of the art in melanoma screening has evolved rapidly in the last two years, with the adoption of deep learning, but those models pose challenges of their own, as they are expensive to train and complex to parameterize. We shed light on those difficulties with an exhaustive evaluation of nine common choices faced when picking or designing deep networks for melanoma screening: model architecture, training dataset, image resolution, data augmentation, input normalization, use of segmentation, duration of training, additional use of an SVM, and test-time data augmentation. We perform a full two-level factorial design, for five different test datasets, resulting in 2560 experiments, which we analyze with a multi-way ANOVA. The main finding is that the size of the training data has a disproportionate influence, explaining almost half the variation in performance. Of the other factors, test-time data augmentation and input resolution are the most helpful. The use of deeper models, if combined with extra data, also helps. We show that the expensive full-factorial design and the unreliable sequential optimization are not the only options: ensembling models allows obtaining reliable results with limited resources. We also warn against the very common practice of hyperparameter-tuning and testing on the same dataset, showing the clear (and unfair) increases this practice brings to performance metrics, leading to overoptimistic results.
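As a rough illustration of the kind of analysis the abstract describes (not the authors' actual code), the sketch below builds a two-level full-factorial grid over a handful of binary factors, attaches a synthetic performance response, and runs a multi-way ANOVA to attribute the share of variance to each factor. The factor names and the simulated `auc` values are hypothetical placeholders; in the study the response would be the measured performance of each trained model.

```python
# Minimal sketch, assuming binary (two-level) factors and a scalar
# performance metric per configuration. Not the paper's implementation.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical subset of the nine design factors studied in the paper.
factors = ["train_data", "resolution", "augmentation", "test_augmentation"]

# Enumerate every combination of factor levels (the full factorial grid).
grid = pd.DataFrame(list(itertools.product([0, 1], repeat=len(factors))),
                    columns=factors)

# Placeholder response: here a simulated AUC; in practice this would be
# the metric measured for the model trained under each configuration.
rng = np.random.default_rng(0)
grid["auc"] = (0.80
               + 0.06 * grid["train_data"]          # larger training set
               + 0.02 * grid["resolution"]          # higher input resolution
               + 0.015 * grid["test_augmentation"]  # test-time augmentation
               + rng.normal(0, 0.01, len(grid)))    # experimental noise

# Fit a linear model with main effects and run a multi-way ANOVA.
formula = "auc ~ " + " + ".join(f"C({f})" for f in factors)
model = smf.ols(formula, data=grid).fit()
table = anova_lm(model, typ=2)

# Eta-squared: fraction of total variation explained by each factor.
table["eta_sq"] = table["sum_sq"] / table["sum_sq"].sum()
print(table[["sum_sq", "F", "PR(>F)", "eta_sq"]])
```

In this toy setup the `eta_sq` column plays the role of the paper's variance attribution, where the training-data factor dominates; interaction terms (e.g. depth combined with extra data) could be added to the formula with `*` instead of `+`.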