Recover Canonical-View Faces in the Wild with Deep Neural Networks

Abstract

Face images in the wild undergo large intra-personal variations, such as pose, illumination, occlusion, and low resolution, which pose great challenges to face-related applications. This paper addresses these challenges by proposing a new deep learning framework that recovers the canonical view of a face image. It dramatically reduces intra-person variance while preserving inter-person discriminativeness. Unlike existing face reconstruction methods, which were either evaluated in controlled 2D environments or relied on 3D information, our approach directly learns the transformation from face images with a complex set of variations to their canonical views. At the training stage, to avoid the costly process of hand-labeling canonical-view images in the training set, we devise a new measurement to automatically select or synthesize a canonical-view image for each identity. As an application, this face recovery approach is used for face verification: facial features are learned from the recovered canonical-view face images with a facial-component-based convolutional neural network. Our approach achieves state-of-the-art performance on the LFW dataset.
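The abstract describes learning a direct mapping from in-the-wild face images to canonical-view targets that are automatically selected or synthesized for each identity. The sketch below is a minimal illustrative assumption of that training setup in PyTorch; the encoder-decoder shape, layer sizes, and mean-squared-error reconstruction loss are placeholders and are not the architecture, loss, or selection measurement reported in the paper.

```python
# Illustrative sketch only: a small network f(x) trained so that f(x) matches
# the canonical-view image y of the same identity, per the abstract's setup.
import torch
import torch.nn as nn

class CanonicalViewRecovery(nn.Module):
    """Hypothetical encoder-decoder that regresses a canonical-view face."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, x, y_canonical):
    """One step: x is a batch of in-the-wild faces, y_canonical the
    automatically chosen canonical-view target for each identity."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y_canonical)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random tensors standing in for 64x64 grayscale face crops.
model = CanonicalViewRecovery()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.rand(8, 1, 64, 64)   # faces with pose/illumination variation
y = torch.rand(8, 1, 64, 64)   # canonical-view targets per identity
print(train_step(model, opt, x, y))
```

In the paper's pipeline, the recovered canonical-view images would then feed a separate facial-component-based CNN for verification; that stage is not shown here.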
