Deep learning tools have recently gained much attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. Bayesian models offer a mathematically grounded framework for reasoning about model uncertainty, but usually come with a prohibitive computational cost. We show that dropout in neural networks (NNs) can be interpreted as a Bayesian approximation. As a direct result, we obtain tools for modelling uncertainty with dropout NNs -- extracting information from existing models that has so far been thrown away. This mitigates the problem of representing uncertainty in deep learning without increasing computational cost or sacrificing test accuracy. We perform an exploratory study of the properties of dropout uncertainty. Various network architectures and non-linearities are assessed on tasks of extrapolation, interpolation, and classification. We show that model uncertainty is indispensable for classification, using MNIST as an example, and use the model's uncertainty in a Bayesian pipeline with deep reinforcement learning as a practical task.
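The core idea can be illustrated with a minimal sketch: keep dropout active at test time and average several stochastic forward passes, using their spread as a predictive uncertainty estimate. The PyTorch setup below is an assumption for illustration; the architecture, dropout rate, and number of samples are hypothetical and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Small regression network with dropout after the hidden layer.
class DropoutMLP(nn.Module):
    def __init__(self, in_dim=1, hidden=50, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # kept stochastic at prediction time for MC dropout
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    """Run several stochastic forward passes with dropout enabled and
    return the predictive mean and standard deviation."""
    model.train()  # train mode keeps dropout layers active
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

# Usage (after training `model` with dropout as usual):
#   mean, std = mc_dropout_predict(model, x_test)
# where `std` serves as an estimate of the model's predictive uncertainty.
```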