A Bottom-up Approach for Pancreas Segmentation using Cascaded Superpixels and (Deep) Image Patch Labeling

Robust automated organ segmentation is a prerequisite for computer-aided diagnosis (CAD), quantitative imaging analysis, detection of pathologies, and surgical assistance. We present a fully automated bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans. The method is based on a hierarchical cascade of information propagation, classifying image patches at different resolutions and then image segments (superpixels). The system has four stages: 1) decomposing each CT slice into a set of disjoint, boundary-preserving superpixels; 2) computing pancreas class probability maps via dense patch labeling; 3) classifying superpixels by pooling both intensity and probability features into empirical statistics within a cascaded random forest framework; and 4) simple connectivity-based post-processing. Dense image patch labeling is conducted by two schemes: an efficient random forest classifier on image histogram, location, and texture features; and a more expensive (but more specific) deep convolutional neural network classifier operating on larger image windows (i.e., with more spatial context). 2D CT slices are over-segmented with the Simple Linear Iterative Clustering (SLIC) approach, after model/parameter calibration, and labeled at the superpixel level as positive (pancreas) or negative (non-pancreas background). The approach is evaluated on a database of 80 manually segmented CT volumes under six-fold cross-validation. Our results are comparable to, or better than, state-of-the-art methods (evaluated by leave-one-patient-out), with a Dice coefficient of 70.7% and a Jaccard index of 57.9%. In addition, computational efficiency is drastically improved: our method requires on the order of 6-8 minutes per testing case, compared with >= 10 hours for competing methods.
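To make the pipeline concrete, below is a minimal sketch of stages 1 and 3: SLIC over-segmentation of a 2D CT slice followed by superpixel-level classification on pooled intensity and probability statistics. It assumes scikit-image's SLIC and scikit-learn's random forest as stand-ins; the parameter values, percentile-based feature pooling, and variable names (e.g., `prob_map`, `X_train`) are illustrative assumptions, not the calibrated settings or exact features reported in the paper.

```python
# Sketch of superpixel decomposition and superpixel-level classification.
# Assumes scikit-image >= 0.19 (channel_axis argument) and scikit-learn.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(ct_slice, prob_map, n_segments=1000, compactness=0.05):
    """Over-segment one CT slice and pool per-superpixel statistics.

    ct_slice : 2D array of CT intensities.
    prob_map : 2D array of pancreas class probabilities from dense
               patch labeling (stage 2 output); assumed given here.
    """
    labels = slic(ct_slice, n_segments=n_segments, compactness=compactness,
                  channel_axis=None)  # grayscale input
    feats = []
    for sp in np.unique(labels):
        mask = labels == sp
        intens = ct_slice[mask]
        probs = prob_map[mask]
        # Empirical statistics pooled over the superpixel (illustrative choice).
        feats.append(np.concatenate([
            np.percentile(intens, [10, 25, 50, 75, 90]),
            np.percentile(probs, [10, 25, 50, 75, 90]),
            [intens.mean(), probs.mean()],
        ]))
    return labels, np.asarray(feats)

# One stage of the cascaded random-forest superpixel classifier.
# X_train / y_train would be pooled features and positive/negative
# superpixel labels derived from the manual segmentations.
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# superpixel_probs = clf.predict_proba(X_test)[:, 1]
```

In the full system this classifier would be applied in a cascade, with connectivity-based post-processing (stage 4) on the resulting superpixel labels; that part is omitted here.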