This paper examines the assumptions underlying the derived equivalence between dropout noise injection and regularisation for logistic regression with negative log loss. We show that the approximation method is based on a divergent Taylor expansion, making subsequent work that uses this approximation to compare dropout-trained logistic regression with standard regularisers ill-founded to date. Moreover, the approximation approach is shown to be invalid under any robust set of constraints. Finally, we show how this finding extends to general neural network topologies that use a cross-entropy prediction layer.
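For context, the equivalence in question is typically derived by truncating a Taylor expansion of the expected loss under multiplicative dropout noise at second order. A minimal sketch of that standard derivation follows, with notation ($\ell$, $A$, $\theta$, $\tilde{x}$) chosen here for illustration rather than taken from the paper:
\[
\mathbb{E}_{\tilde{x}}\!\left[\ell(\theta;\tilde{x},y)\right]
\;\approx\;
\ell(\theta;x,y)
\;+\;
\tfrac{1}{2}\,A''\!\left(x^{\top}\theta\right)\,
\operatorname{Var}\!\left(\tilde{x}^{\top}\theta\right),
\qquad
A(z)=\log\!\left(1+e^{z}\right),
\]
where $\tilde{x}$ denotes the dropout-corrupted input and the quadratic term acts as a data-dependent regulariser. The paper's claim of divergence concerns the validity of truncating such an expansion.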