Can Vision Transformers with ResNet's Global Features Fairly Authenticate Demographic Faces?

Main: 10 pages, 8 figures, 1 table; bibliography: 4 pages
Abstract

Biometric face authentication is crucial in computer vision, but ensuring fairness and generalization across demographic groups remains a significant challenge. We therefore investigated whether Vision Transformer (ViT) and ResNet models, leveraging pre-trained global features, can fairly authenticate faces across demographic groups while relying minimally on local features. In this investigation, we used three pre-trained state-of-the-art (SOTA) ViT foundation models from Facebook, Google, and Microsoft to extract global features, alongside ResNet-18. We concatenated the features from the ViT and ResNet, passed them through two fully connected layers, and trained on customized face image datasets to capture local features. We then designed a novel few-shot prototype network with backbone feature embeddings, and developed new demographic face image support and query datasets for this empirical study. We tested the network on these datasets in one-shot, three-shot, and five-shot scenarios to assess how performance improves as the support set grows, and report results across datasets spanning different races/ethnicities, genders, and age groups. Among the three SOTA ViTs, the Microsoft Swin Transformer backbone performed best on this task. The code and data are available at: this https URL.
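The pipeline the abstract describes (concatenating ViT and ResNet embeddings, then classifying queries by distance to class prototypes built from a small support set) can be sketched as follows. This is a minimal illustration, not the authors' released code: the embeddings here are plain NumPy vectors standing in for the actual pre-trained backbone outputs, and all function names are hypothetical.

```python
import numpy as np

def concat_features(vit_feat, resnet_feat):
    # Fuse the global (ViT) and ResNet feature vectors by concatenation,
    # mirroring the fusion step described in the abstract.
    return np.concatenate([vit_feat, resnet_feat], axis=-1)

def class_prototypes(support_emb, support_labels, n_classes):
    # Prototypical-network step: each class prototype is the mean
    # embedding of that class's support samples (1-, 3-, or 5-shot).
    return np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in range(n_classes)]
    )

def classify_queries(query_emb, prototypes):
    # Assign each query to the nearest prototype (Euclidean distance),
    # the standard decision rule in prototypical networks.
    dists = np.linalg.norm(
        query_emb[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)
```

In practice the support and query embeddings would come from the frozen ViT/ResNet backbones followed by the two trained fully connected layers; increasing the number of shots simply averages more support embeddings into each prototype, which is why performance tends to improve from one-shot to five-shot.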

@article{sufian2025_2506.05383,
  title={Can Vision Transformers with ResNet's Global Features Fairly Authenticate Demographic Faces?},
  author={Abu Sufian and Marco Leo and Cosimo Distante and Anirudha Ghosh and Debaditya Barman},
  journal={arXiv preprint arXiv:2506.05383},
  year={2025}
}