Reconstruction for Feature Disentanglement in Pose-invariant Face
Recognition
Deep neural networks (DNNs) trained on large-scale datasets have recently achieved impressive improvements in face recognition. However, a persistent challenge is to develop methods that can handle large pose variations, which are relatively under-represented in training data. This paper presents a method for learning a feature representation that is invariant to pose, without requiring extensive pose coverage in the training data. We first propose a synthesis network that generates non-frontal views from a single frontal image, increasing the diversity of training data while preserving the accurate facial details that are critical for identity discrimination. Our next contribution is a multi-source multi-task DNN that learns a rich embedding encoding identity as well as auxiliary information such as pose and landmark locations. Finally, we propose a Siamese network that explicitly disentangles identity and pose by demanding alignment between feature reconstructions obtained from various combinations of the identity and pose features of two images of the same subject. Experiments on face datasets in both controlled and in-the-wild scenarios, including MultiPIE, LFW, and 300WLP, show that our method consistently outperforms the state of the art, especially on images with large head pose variations.
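The Siamese disentanglement idea can be sketched in miniature as follows. This is an illustrative assumption, not the paper's implementation: the paper uses a learned reconstruction module and trained embeddings, whereas here `reconstruct` is a stand-in (simple concatenation) and the features are plain lists, so only the structure of the loss is shown. Given two images of the same subject, swapping their identity features before reconstruction should change nothing; any mismatch is penalized.

```python
def l2_dist(u, v):
    """Euclidean distance between two feature vectors (plain lists)."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def reconstruct(identity_feat, pose_feat):
    # Hypothetical reconstruction: concatenate identity and pose parts.
    # In the paper a learned module plays this role; concatenation is
    # a stand-in so the loss structure is runnable.
    return identity_feat + pose_feat

def siamese_reconstruction_loss(id_a, pose_a, id_b, pose_b):
    """Alignment penalty over reconstructions that mix identity and
    pose features from two images of the same subject."""
    # Genuine reconstructions of each image's embedding.
    rec_a = reconstruct(id_a, pose_a)
    rec_b = reconstruct(id_b, pose_b)
    # Cross reconstructions: swap identity features across the pair.
    # For the same subject these should match the genuine ones.
    rec_a_swap = reconstruct(id_b, pose_a)
    rec_b_swap = reconstruct(id_a, pose_b)
    return l2_dist(rec_a, rec_a_swap) + l2_dist(rec_b, rec_b_swap)
```

If the identity features of the two images agree exactly, the loss is zero; any residual pose information leaking into the identity part makes the swapped reconstructions disagree and is penalized, which is what drives the disentanglement during training.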