
Extending the Relative Seriality Formalism for Interpretable Deep Learning of Normal Tissue Complication Probability Models

Abstract

We formally demonstrate that the relative seriality model of Källman et al. maps exactly onto a simple type of convolutional neural network. This correspondence yields a natural interpretation of the feedforward connections in the convolutional layer and of stacked intermediate pooling layers in terms of bystander effects and hierarchical tissue organization, respectively. These results serve as a proof of principle for radiobiologically interpretable deep learning of normal tissue complication probability (NTCP) from large-scale imaging and dosimetry datasets.
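The mapping itself is not spelled out in the abstract, but the underlying relative seriality formula is standard. As a minimal sketch, assuming the usual Poisson voxel response P(D) = 2^(-exp(e*gamma*(1 - D/D50))) and the Källman expression NTCP = [1 - prod_i (1 - P(D_i)^s)^(Δv_i)]^(1/s), the calculation reduces to an elementwise nonlinearity on the dose distribution followed by a volume-weighted product pooling, the kind of structure the paper relates to convolutional and pooling layers. All parameter values below are hypothetical.

import numpy as np

def voxel_response(dose, d50=50.0, gamma=2.0):
    # Poisson voxel response: P(D) = 2**(-exp(e * gamma * (1 - D / D50))).
    # d50 and gamma are the usual dose-response parameters (illustrative values, not fitted).
    return 2.0 ** (-np.exp(np.e * gamma * (1.0 - dose / d50)))

def relative_seriality_ntcp(dose, rel_volume, s=1.0, d50=50.0, gamma=2.0):
    # Kallman relative seriality: NTCP = [1 - prod_i (1 - P(D_i)**s)**dv_i]**(1/s),
    # i.e., an elementwise nonlinearity followed by a volume-weighted product pool.
    p = voxel_response(dose, d50, gamma)
    return (1.0 - np.prod((1.0 - p ** s) ** rel_volume)) ** (1.0 / s)

# Hypothetical example: 60 Gy to half of a 100-voxel organ, relative seriality s = 0.5.
dose = np.array([60.0] * 50 + [0.0] * 50)
rel_volume = np.full(100, 1.0 / 100)
print(relative_seriality_ntcp(dose, rel_volume, s=0.5))

Because the product pooling can be rewritten as a sum of logarithms, the whole expression is differentiable in its parameters, which is consistent with treating it as a trainable network layer.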
