
arXiv:1505.05561

Why Regularized Auto-Encoders learn Sparse Representation?

21 May 2015
Devansh Arpit
Yingbo Zhou
H. Ngo
V. Govindaraju
Abstract

Sparse distributed representation is the key to learning useful features in deep learning algorithms, not only because it is an efficient mode of data representation, but more importantly because it captures the generation process of most real-world data. Although a number of regularized auto-encoders (AEs) enforce sparsity explicitly in their learned representation while others don't, there has been little formal analysis of what encourages sparsity in these models in general. Our objective here is therefore to formally study this general problem for regularized auto-encoders. We show which properties of both the regularization and the activation function play an important role in encouraging sparsity. We provide sufficient conditions on both criteria and show that multiple popular models (e.g., de-noising and contractive auto-encoders) and activations (e.g., rectified linear and sigmoid) satisfy these conditions, thus explaining the sparsity in their learned representations. Our theoretical and empirical analysis together not only throws light on the properties of regularization and activation that are conducive to sparsity, but also brings a number of existing auto-encoder models and activation functions under a unified analytical framework, thereby yielding deeper insight into unsupervised representation learning.
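A minimal toy sketch (not code from the paper) of the mechanism the abstract describes: with a rectified-linear encoder h = ReLU(Wx + b), a pre-activation shifted in the negative direction — the kind of shift the paper's analysis associates with regularization — pushes more units into ReLU's zero region, producing exactly-sparse codes. All names and values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer ReLU encoder: h = ReLU(W x + b).
X = rng.standard_normal((1000, 20))       # 1000 synthetic inputs, 20 dims
W = rng.standard_normal((50, 20)) * 0.1   # 50 hidden units, small weights
b_zero = np.zeros(50)                     # unshifted bias
b_neg = -0.1 * np.ones(50)                # bias pushed negative (illustrating
                                          # the regularization-induced shift)

def relu(z):
    return np.maximum(z, 0.0)

def sparsity(H):
    """Fraction of hidden activations that are exactly zero."""
    return float(np.mean(H == 0.0))

H0 = relu(X @ W.T + b_zero)
H1 = relu(X @ W.T + b_neg)

print(f"sparsity with zero bias:     {sparsity(H0):.2f}")
print(f"sparsity with negative bias: {sparsity(H1):.2f}")
```

With a symmetric pre-activation distribution, the unshifted encoder zeroes roughly half its units; the negatively shifted one zeroes more, so its representation is measurably sparser. Sigmoid units show the analogous effect as saturation near zero rather than exact zeros.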
