Colors Matter: AI-Driven Exploration of Human Feature Colors

20 May 2025
Rama Alyoubi
Taif Alharbi
Albatul Alghamdi
Yara Alshehri
Elham Alghamdi
arXiv (abs) · PDF · HTML
Main: 15 pages · 8 figures · Bibliography: 3 pages · 11 tables
Abstract

This study presents a robust framework that leverages advanced imaging techniques and machine learning for feature extraction and classification of key human attributes, namely skin tone, hair color, iris color, and vein-based undertones. The system employs a multi-stage pipeline involving face detection, region segmentation, and dominant color extraction to isolate and analyze these features. Techniques such as X-means clustering, alongside perceptually uniform distance metrics like Delta E (CIEDE2000), are applied within both LAB and HSV color spaces to enhance the accuracy of color differentiation. For classification, the dominant tones of the skin, hair, and iris are extracted and matched to a custom tone scale, while vein analysis from wrist images enables undertone classification into "Warm" or "Cool" based on LAB differences. Each module uses targeted segmentation and color space transformations to ensure perceptual precision. The system achieves up to 80% accuracy in tone classification using the Delta E-HSV method with Gaussian blur, demonstrating reliable performance across varied lighting and image conditions. This work highlights the potential of AI-powered color analysis and feature extraction for delivering inclusive, precise, and nuanced classification, supporting applications in beauty technology, digital personalization, and visual analytics.
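
The sketch below illustrates the general idea behind the pipeline described in the abstract: extract a dominant color from an already-segmented region, then match it to a tone scale with the perceptually uniform CIEDE2000 distance in LAB space. It is a minimal illustration, not the authors' implementation: KMeans stands in for the X-means clustering the paper mentions, the tone-scale values and the Warm/Cool undertone rule are placeholder assumptions, and no face detection or segmentation is shown.

# Hedged sketch of dominant-color extraction and Delta E (CIEDE2000) tone matching.
# Assumes a pre-segmented RGB region (e.g. a skin patch) as input; all reference
# values are illustrative placeholders, not the paper's custom scale.

import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def dominant_color_lab(region_rgb, n_clusters=3):
    """Return the LAB value of the largest color cluster in an RGB region.

    region_rgb: float array in [0, 1] with shape (H, W, 3).
    KMeans is used here as a stand-in for the X-means clustering in the paper.
    """
    lab = color.rgb2lab(region_rgb).reshape(-1, 3)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(lab)
    largest = np.bincount(km.labels_).argmax()
    return km.cluster_centers_[largest]

def classify_tone(dominant_lab, tone_scale):
    """Match a dominant LAB color to the nearest named tone using CIEDE2000."""
    names = list(tone_scale)
    refs = np.array([tone_scale[n] for n in names])
    dists = color.deltaE_ciede2000(np.tile(dominant_lab, (len(refs), 1)), refs)
    return names[int(np.argmin(dists))]

def classify_undertone(vein_lab):
    """Toy Warm/Cool rule on the LAB b* channel of a wrist-vein region:
    a yellow bias (b* > 0) suggests Warm, a blue bias suggests Cool.
    The paper's actual decision rule may differ."""
    _, _, b = vein_lab
    return "Warm" if b > 0 else "Cool"

if __name__ == "__main__":
    # Synthetic skin-like patch for demonstration only.
    rng = np.random.default_rng(0)
    patch = np.clip(rng.normal([0.75, 0.6, 0.5], 0.02, (64, 64, 3)), 0, 1)
    # Placeholder tone scale in LAB coordinates.
    tone_scale = {"Light": [85, 5, 15], "Medium": [65, 10, 20], "Deep": [40, 12, 18]}
    dom = dominant_color_lab(patch)
    print("dominant LAB:", np.round(dom, 1))
    print("tone:", classify_tone(dom, tone_scale))
    print("undertone:", classify_undertone(dom))

In the same spirit, the abstract's Gaussian-blur variant would simply smooth the region (e.g. with scipy.ndimage.gaussian_filter) before clustering to reduce sensor noise and lighting speckle.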

@article{alyoubi2025_2505.14931,
  title={Colors Matter: AI-Driven Exploration of Human Feature Colors},
  author={Rama Alyoubi and Taif Alharbi and Albatul Alghamdi and Yara Alshehri and Elham Alghamdi},
  journal={arXiv preprint arXiv:2505.14931},
  year={2025}
}