
Stacked What-Where Auto-encoders

Abstract

We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning without requiring sampling. An instantiation of SWWAE is essentially a convolutional net (Convnet) coupled with a deconvolutional net (Deconvnet). The objective function includes reconstruction terms that penalize the hidden states in the Deconvnet for differing from the hidden states of the Convnet. Each pooling layer is seen as producing two sets of variables: the "what", which is fed to the next layer, and the "where" (the max-pooling switch positions), which is fed to the corresponding layer in the generative decoder.
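The "what"/"where" split at each pooling layer can be illustrated with a minimal numpy sketch (not the authors' implementation; the function names and the 2x2 pooling window are assumptions for illustration): max pooling yields the pooled values (the "what") plus the in-window argmax switches (the "where"), and the decoder's unpooling step uses those switches to place each value back at its original location.

```python
import numpy as np

def pool_what_where(x, k=2):
    """Non-overlapping k-by-k max pooling.
    Returns the 'what' (max values) and the 'where'
    (flat argmax switch position within each window)."""
    H, W = x.shape
    windows = (x.reshape(H // k, k, W // k, k)
                .transpose(0, 2, 1, 3)
                .reshape(H // k, W // k, k * k))
    where = windows.argmax(axis=-1)   # switch positions, shape (H/k, W/k)
    what = windows.max(axis=-1)       # pooled values,    shape (H/k, W/k)
    return what, where

def unpool(what, where, k=2):
    """Decoder-side unpooling: place each 'what' value back at its
    'where' switch position; all other entries are zero."""
    Hp, Wp = what.shape
    out = np.zeros((Hp, Wp, k * k))
    rows, cols = np.indices((Hp, Wp))
    out[rows, cols, where] = what
    return (out.reshape(Hp, Wp, k, k)
               .transpose(0, 2, 1, 3)
               .reshape(Hp * k, Wp * k))

x = np.array([[1., 2., 0., 3.],
              [4., 0., 1., 0.],
              [0., 5., 2., 0.],
              [1., 0., 0., 6.]])
what, where = pool_what_where(x)
recon = unpool(what, where)
```

Here `recon` is zero everywhere except at the four positions recorded by the switches, where the original maxima reappear; this is the information the Deconvnet pathway receives from each pooling layer of the Convnet.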
