
Dropout as a Regularizer of Interaction Effects

Abstract

We examine Dropout through the perspective of interactions. This view provides a symmetry that explains Dropout: given $N$ variables, there are ${N \choose k}$ possible sets of $k$ variables that can form an interaction (i.e., $\mathcal{O}(N^k)$); conversely, the probability that an interaction of $k$ variables survives Dropout at rate $p$ is $(1-p)^k$ (decaying with $k$). These rates effectively cancel, and so Dropout regularizes against higher-order interactions. We support this perspective with analytical and empirical results. Viewing Dropout as a regularizer against interaction effects has several practical implications: (1) higher Dropout rates should be used when stronger regularization against spurious high-order interactions is needed, (2) caution should be exercised when interpreting Dropout-based explanations and uncertainty measures, and (3) networks trained with Input Dropout are biased estimators. We also compare Dropout to other regularizers and find that it is difficult to obtain the same selective pressure against high-order interactions.
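The survival argument above can be illustrated with a minimal simulation (not from the paper; the function name and parameters are illustrative): each of the $k$ variables in an interaction is dropped independently with probability $p$, and the interaction survives only if all $k$ variables survive, so the empirical survival rate should approach $(1-p)^k$.

```python
import random

def interaction_survival_rate(k, p, trials=100_000, seed=0):
    """Estimate the probability that all k variables of an interaction
    survive Dropout at rate p (each dropped independently)."""
    rng = random.Random(seed)
    survived = sum(
        all(rng.random() >= p for _ in range(k)) for _ in range(trials)
    )
    return survived / trials

# Empirical rates track the closed form (1 - p)**k, decaying with k.
for k in (1, 2, 3):
    print(k, interaction_survival_rate(k, p=0.5), (1 - 0.5) ** k)
```

Note how the exponential decay in $k$ counteracts the $\mathcal{O}(N^k)$ growth in the number of candidate interactions, which is the cancellation the abstract describes.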
