CLIP-HandID: Vision-Language Model for Hand-Based Person Identification

14 June 2025
Nathanael L. Baisa
Babu Pallam
Amudhavel Jayavel
Main: 4 pages, 3 figures, 1 table; bibliography: 2 pages
Abstract

This paper introduces a new approach to person identification based on hand images, designed specifically for criminal investigations. The method is particularly valuable in serious crimes such as sexual abuse, where hand images are often the only identifiable evidence available. Our proposed method, CLIP-HandID, leverages a pre-trained foundational vision-language model, CLIP, to efficiently learn discriminative deep feature representations from hand images fed to CLIP's image encoder, using textual prompts as semantic guidance. Because the labels of hand images are indices rather than text descriptions, we propose to learn pseudo-tokens that represent specific visual contexts or appearance attributes with a textual inversion network. The learned pseudo-tokens are incorporated into textual prompts given as input to CLIP's text encoder, leveraging its multi-modal reasoning to improve generalization for identification. Through extensive evaluations on two large, publicly available hand datasets with multi-ethnic representation, we show that our method substantially surpasses existing approaches.
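The overall pipeline described in the abstract can be pictured with a minimal PyTorch sketch: an image feature is mapped by a textual inversion network to a pseudo-token, the pseudo-token is inserted into a fixed prompt, and image and text features are aligned with a CLIP-style contrastive loss. Everything below is an illustrative assumption, not the authors' implementation: the module names (TextualInversionNet, CLIPHandIDSketch), the placeholder encoders standing in for CLIP's frozen image and text encoders, the prompt layout and insertion position, the dimensions, and the choice to feed the image feature to the inversion network.

# Minimal sketch of the pseudo-token / prompt idea described in the abstract.
# All names, dimensions, and the prompt layout are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextualInversionNet(nn.Module):
    """Maps an image feature to a pseudo-token embedding (hypothetical design)."""
    def __init__(self, feat_dim=512, token_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, token_dim), nn.ReLU(), nn.Linear(token_dim, token_dim)
        )

    def forward(self, image_feat):
        return self.mlp(image_feat)  # (B, token_dim) pseudo-token embedding


class CLIPHandIDSketch(nn.Module):
    """Stand-ins for CLIP's two encoders; a real implementation would reuse
    the pre-trained CLIP image/text encoders instead of these placeholders."""
    def __init__(self, embed_dim=512, token_dim=512, prompt_len=8):
        super().__init__()
        self.image_encoder = nn.Linear(3 * 224 * 224, embed_dim)   # placeholder encoder
        self.prompt_embeddings = nn.Parameter(torch.randn(prompt_len, token_dim) * 0.02)
        self.inversion = TextualInversionNet(embed_dim, token_dim)
        self.text_encoder = nn.GRU(token_dim, embed_dim, batch_first=True)  # placeholder
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ln(1/0.07), as in CLIP

    def forward(self, images):
        B = images.size(0)
        img_feat = self.image_encoder(images.flatten(1))              # (B, D)
        pseudo_token = self.inversion(img_feat).unsqueeze(1)          # (B, 1, D)
        # Insert the learned pseudo-token into a fixed prompt, conceptually
        # "a photo of a <S*> hand", where <S*> is the pseudo-token slot.
        prompt = self.prompt_embeddings.unsqueeze(0).expand(B, -1, -1)
        tokens = torch.cat([prompt[:, :4], pseudo_token, prompt[:, 4:]], dim=1)
        _, h = self.text_encoder(tokens)
        txt_feat = h[-1]                                              # (B, D)
        img_feat = F.normalize(img_feat, dim=-1)
        txt_feat = F.normalize(txt_feat, dim=-1)
        # CLIP-style symmetric contrastive loss aligning each hand image
        # with its identity-specific prompt.
        logits = self.logit_scale.exp() * img_feat @ txt_feat.t()
        labels = torch.arange(B, device=images.device)
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2


# Toy usage: a batch of 4 fake hand images.
model = CLIPHandIDSketch()
loss = model(torch.randn(4, 3, 224, 224))
loss.backward()

At identification time, such a model would rank gallery hand images by the similarity of their image features to the query, with the prompt branch acting only as training-time semantic guidance; this, too, is an assumption about the intended use rather than a detail stated in the abstract.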

@article{baisa2025_2506.12447,
  title={CLIP-HandID: Vision-Language Model for Hand-Based Person Identification},
  author={Nathanael L. Baisa and Babu Pallam and Amudhavel Jayavel},
  journal={arXiv preprint arXiv:2506.12447},
  year={2025}
}