Sparse distributed representations are key to learning useful features in deep learning algorithms, not only because they are an efficient mode of data representation, but, more importantly, because they capture the generative process of most real-world data. Although a number of regularized auto-encoders (AEs) enforce sparsity explicitly in their learned representations while others do not, there has been little formal analysis of what encourages sparsity in these models in general. Our objective here is to formally study this general problem for regularized auto-encoders. We show that properties of both the regularization and the activation function play an important role in encouraging sparsity. We provide sufficient conditions on both criteria and show that multiple popular models (e.g., de-noising and contractive auto-encoders) and activations (e.g., rectified linear and sigmoid) satisfy these conditions, thus explaining the sparsity of their learned representations. Our theoretical and empirical analysis together not only throws light on the properties of regularization and activation functions that are conducive to sparsity, but also brings a number of existing auto-encoder models and activation functions under a unified analytical framework, thereby yielding deeper insights into unsupervised representation learning.
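As a hedged illustration of the phenomenon the abstract describes, the following minimal NumPy sketch trains a tiny de-noising auto-encoder with a rectified-linear encoder on toy Gaussian data and then measures what fraction of the hidden units are exactly zero. All sizes, the noise level `sigma`, the learning rate, and the epoch count are assumptions chosen for a quick demonstration, not parameters from the paper; the point is simply that ReLU encoders in a de-noising set-up readily yield representations with many exact zeros.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 20-dimensional Gaussian inputs (assumed, for illustration).
n, d, h = 200, 20, 50
X = rng.standard_normal((n, d))

# De-noising auto-encoder: ReLU encoder, linear decoder (an illustrative choice).
W = rng.standard_normal((d, h)) * 0.1   # encoder weights
b = np.zeros(h)                         # encoder bias
V = rng.standard_normal((h, d)) * 0.1   # decoder weights
c = np.zeros(d)                         # decoder bias

relu = lambda z: np.maximum(z, 0.0)
lr, sigma = 0.01, 0.5                   # assumed learning rate and corruption noise

for epoch in range(200):
    Xn = X + sigma * rng.standard_normal(X.shape)  # corrupt the input
    H = relu(Xn @ W + b)                           # hidden representation
    Xhat = H @ V + c                               # reconstruction
    err = Xhat - X                                 # reconstruct the *clean* input
    # Gradients of mean squared reconstruction error, backpropagated
    # through the linear decoder and the ReLU encoder.
    dV = H.T @ err / n
    dc = err.mean(0)
    dH = (err @ V.T) * (H > 0)
    dW = Xn.T @ dH / n
    db = dH.mean(0)
    W -= lr * dW; b -= lr * db
    V -= lr * dV; c -= lr * dc

# Sparsity of the learned representation on clean inputs:
H = relu(X @ W + b)
sparsity = (H == 0).mean()
print(f"fraction of exactly-zero hidden activations: {sparsity:.2f}")
```

Because the ReLU has a hard saturation region at zero, a substantial fraction of the hidden activations end up exactly zero, which is the kind of sparsity the paper's sufficient conditions account for.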