
CLIP-HandID: Vision-Language Model for Hand-Based Person Identification

Main: 4 pages, 3 figures, 1 table; bibliography: 2 pages
Abstract

This paper introduces a novel approach to person identification using hand images, designed specifically for criminal investigations. The method is particularly valuable in serious crimes such as sexual abuse, where hand images are often the only identifiable evidence available. Our proposed method, CLIP-HandID, leverages a pre-trained vision-language foundation model, CLIP, to efficiently learn discriminative deep feature representations from hand images (the input to CLIP's image encoder), using textual prompts as semantic guidance. Since hand images are labeled with identity indices rather than text descriptions, we employ a textual inversion network to learn pseudo-tokens that encode specific visual contexts or appearance attributes. These learned pseudo-tokens are then incorporated into textual prompts, which are fed into CLIP's text encoder to leverage its multi-modal reasoning and improve generalization for identification. Extensive evaluations on two large, publicly available, multi-ethnic hand datasets show that our method significantly outperforms existing approaches.
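The pipeline described in the abstract (hand images through CLIP's image encoder, a textual inversion network producing pseudo-tokens, and prompts carrying those pseudo-tokens through CLIP's text encoder) can be illustrated with a minimal PyTorch sketch. The prompt template, placeholder position, and inversion-network design below are illustrative assumptions built on the openai/clip package, not the authors' exact implementation.

```python
# Minimal sketch of the CLIP-HandID idea from the abstract (assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # openai/clip: pip install git+https://github.com/openai/CLIP.git


class TextualInversion(nn.Module):
    """Maps an image feature to a pseudo-token in CLIP's token-embedding space (assumed MLP)."""
    def __init__(self, feat_dim: int, token_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, token_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        return self.net(image_features)


def encode_prompt_with_pseudo_token(model, tokens, pseudo_token, placeholder_pos):
    """Runs CLIP's text encoder (mirroring clip.model.CLIP.encode_text) with the
    placeholder token embedding replaced by the learned pseudo-token."""
    x = model.token_embedding(tokens).type(model.dtype)        # (B, L, D)
    x[:, placeholder_pos, :] = pseudo_token.type(model.dtype)  # splice in pseudo-token
    x = x + model.positional_embedding.type(model.dtype)
    x = x.permute(1, 0, 2)
    x = model.transformer(x)
    x = x.permute(1, 0, 2)
    x = model.ln_final(x).type(model.dtype)
    # Take the feature at the end-of-text token (highest token id) and project it.
    x = x[torch.arange(x.shape[0]), tokens.argmax(dim=-1)] @ model.text_projection
    return x


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/16", device=device)

    # Hypothetical prompt; "X" is the placeholder whose embedding gets replaced.
    prompt = "A photo of a X hand."
    tokens = clip.tokenize([prompt]).to(device)
    placeholder_pos = 5  # index of "X" after the start-of-text token (assumed)

    # Stand-in for a batch of preprocessed hand images.
    images = torch.randn(1, 3, 224, 224, device=device)
    with torch.no_grad():
        img_feat = model.encode_image(images).float()

    inversion = TextualInversion(
        img_feat.shape[-1], model.token_embedding.embedding_dim
    ).to(device)
    pseudo = inversion(img_feat)  # one pseudo-token per image

    txt_feat = encode_prompt_with_pseudo_token(model, tokens, pseudo, placeholder_pos)

    # Image-text alignment; during training an identity / contrastive loss would go here.
    sim = F.cosine_similarity(img_feat, txt_feat.float())
    print(sim)
```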

@article{baisa2025_2506.12447,
  title={CLIP-HandID: Vision-Language Model for Hand-Based Person Identification},
  author={Nathanael L. Baisa and Babu Pallam and Amudhavel Jayavel},
  journal={arXiv preprint arXiv:2506.12447},
  year={2025}
}