
On the Transformation of Latent Space in Autoencoders

Abstract

Noting the importance of latent variables in inference and learning, we propose a novel framework for autoencoders based on a homeomorphic transformation of the latent variables. The transformation can reduce the distance between vectors in the transformed space while preserving the topological properties of the original space. We investigate the effect of this latent-space transformation on learning generative models and on denoising corrupted data. Experimental results demonstrate that generative and denoising models built on the proposed framework outperform conventional variational and denoising autoencoders, owing to the transformation. Performance is evaluated in terms of the Hausdorff distance between the sets of training and processed (i.e., generated or denoised) images, which objectively measures their difference, as well as through direct comparison of the visual characteristics of the processed images.
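The abstract does not specify which homeomorphism the paper uses or how the Hausdorff distance is computed, so the following is only a minimal sketch of the two ideas it names: a distance-contracting homeomorphism of the latent space (here assumed, for illustration, to be an elementwise tanh map) and the symmetric Hausdorff distance between two finite point sets (e.g., flattened images). All function names are hypothetical.

    import numpy as np

    def to_contracted_space(z, scale=1.0):
        """Hypothetical homeomorphism R^d -> (-scale, scale)^d.

        tanh is a continuous bijection with a continuous inverse, so it
        preserves topological properties while contracting distances:
        |tanh(a) - tanh(b)| <= |a - b| for all a, b.
        """
        return scale * np.tanh(z)

    def from_contracted_space(t, scale=1.0):
        """Inverse map back to the original latent space."""
        return np.arctanh(np.clip(t / scale, -1 + 1e-7, 1 - 1e-7))

    def hausdorff_distance(A, B):
        """Symmetric Hausdorff distance between two finite point sets.

        A, B: arrays of shape (n, d) and (m, d), e.g. flattened images.
        """
        # Pairwise Euclidean distances between all points of A and B.
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        # max over each set of the distance to its nearest neighbour in the other set.
        return max(d.min(axis=1).max(), d.min(axis=0).max())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        z = rng.normal(size=(8, 2))                  # toy latent vectors
        t = to_contracted_space(z)
        # Distances shrink in the transformed space ...
        print(np.linalg.norm(z[0] - z[1]), np.linalg.norm(t[0] - t[1]))
        # ... and the map is invertible up to numerical precision.
        print(np.allclose(z, from_contracted_space(t), atol=1e-5))
        # Hausdorff distance between two toy "image" sets.
        print(hausdorff_distance(rng.normal(size=(5, 4)), rng.normal(size=(5, 4))))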
