Isolated Sign Language Recognition with Segmentation and Pose Estimation

Daniel Perkins
Davis Hunter
Dhrumil Patel
Galen Flanagan
Main: 5 pages
9 figures
Bibliography: 1 page
3 tables
Appendix: 1 page
Abstract

The recent surge in large language models has automated translation of spoken and written languages. However, these advances remain largely inaccessible to American Sign Language (ASL) users, whose language relies on complex visual cues. Isolated sign language recognition (ISLR), the task of classifying videos of individual signs, can help bridge this gap, but it is currently limited by scarce per-sign data, high signer variability, and substantial computational costs. We propose a model for ISLR that reduces computational requirements while maintaining robustness to signer variation. Our approach integrates (i) a pose estimation pipeline that extracts hand and face joint coordinates, (ii) a segmentation module that isolates relevant information, and (iii) a ResNet-Transformer backbone that jointly models spatial and temporal dependencies.
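To make the three-part architecture concrete, here is a minimal PyTorch sketch of how the components could fit together. All of it is an assumption for illustration rather than the paper's code: the keypoint count, embedding sizes, class count, and frame budget are placeholder hyperparameters, a small MLP stands in for the per-frame ResNet encoder, and the pose estimation and segmentation stages are assumed to run upstream and supply clean joint coordinates.

```python
import torch
import torch.nn as nn

class ISLRModel(nn.Module):
    """Pose-based isolated sign classifier: per-frame encoder + Transformer."""

    def __init__(self, num_keypoints=75, d_model=256, num_heads=8,
                 num_layers=4, num_classes=2000, max_frames=128):
        super().__init__()
        # Steps (i) and (ii), pose estimation and segmentation, are assumed
        # to run upstream, yielding (x, y) coordinates for the hand and face
        # joints of each frame.
        # Per-frame spatial encoder: a small MLP standing in for the paper's
        # ResNet component (layer sizes are illustrative).
        self.frame_encoder = nn.Sequential(
            nn.Linear(num_keypoints * 2, d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        # Learned positional embeddings so the Transformer sees frame order.
        self.pos = nn.Parameter(torch.zeros(1, max_frames, d_model))
        # Step (iii): Transformer encoder models temporal dependencies
        # across the sequence of frame embeddings.
        layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, keypoints):
        # keypoints: (batch, frames, num_keypoints, 2) joint coordinates
        b, t, k, c = keypoints.shape
        x = self.frame_encoder(keypoints.reshape(b, t, k * c))
        x = self.temporal(x + self.pos[:, :t])   # (batch, frames, d_model)
        return self.classifier(x.mean(dim=1))    # pool over time, classify

# Usage: batch of 2 clips, 16 frames, 75 joints per frame (shapes illustrative).
model = ISLRModel()
logits = model(torch.randn(2, 16, 75, 2))  # -> (2, 2000) class logits
```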
