Empirically Analyzing the Effect of Dataset Biases on Deep Face Recognition Systems
- CVBM

It is unknown what kinds of biases modern in-the-wild face datasets contain, because these datasets lack detailed annotation. A direct consequence is that overall recognition rates alone provide only limited insight into the generalization ability of Deep Convolutional Neural Networks (DCNNs). We propose to empirically study the effect of different types of dataset bias on the generalization ability of DCNNs. Using synthetically generated face images, we study the face recognition rate as a function of interpretable parameters such as face pose and light. The proposed method allows valuable details about the generalization performance of different DCNN architectures to be observed and compared. In our experiments, we find that: 1) dataset biases indeed have a significant influence on the generalization performance of DCNNs; 2) DCNNs can generalize surprisingly well to unseen illumination conditions and large sampling gaps in the pose variation; 3) we uncover a main limitation of current DCNN architectures, namely the difficulty to generalize when different identities do not share the same pose variation; 4) we demonstrate that our findings on synthetic data also apply when learning from real-world data. Our face image generator is publicly available to enable the community to benchmark other DCNN architectures.
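The core evaluation idea described above — measuring recognition rate as a function of an interpretable generative parameter rather than as a single aggregate number — can be sketched as follows. This is a minimal illustration, not the paper's code: the identities, yaw angles, and the toy "model" whose accuracy degrades with pose are all placeholder assumptions standing in for a real DCNN evaluated on images from a parametric face generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test set: each sample has a true identity, a predicted
# identity, and the yaw angle (degrees) used to render the face image.
n = 2000
n_ids = 50
true_id = rng.integers(0, n_ids, size=n)
yaw = rng.uniform(-90, 90, size=n)

# Toy stand-in for a trained DCNN: prediction quality degrades as |yaw|
# grows, mimicking a model trained on a frontal-pose-biased dataset.
correct = rng.random(n) < (0.95 - 0.6 * np.abs(yaw) / 90)
pred_id = np.where(correct, true_id, (true_id + 1) % n_ids)

# Aggregate accuracy hides the pose dependence ...
overall_acc = float(np.mean(pred_id == true_id))
print(f"overall accuracy: {overall_acc:.3f}")

# ... while binning by the generative parameter exposes it.
bins = np.linspace(-90, 90, 7)          # six 30-degree yaw bins
bin_idx = np.digitize(yaw, bins) - 1
bin_acc = []
for b in range(len(bins) - 1):
    mask = bin_idx == b
    acc = float(np.mean(pred_id[mask] == true_id[mask]))
    bin_acc.append(acc)
    print(f"yaw [{bins[b]:6.1f}, {bins[b+1]:6.1f}): accuracy = {acc:.3f}")
```

Because the images are synthesized, every axis of variation (pose, light, identity) is known exactly, so such per-parameter curves can be compared across DCNN architectures.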