One-Shot Gesture Recognition for Underwater Diver-To-Robot Communication
Reliable human-robot communication is essential for underwater human-robot interaction (U-HRI), yet traditional methods such as acoustic signaling and predefined gesture-based models suffer from limited adaptability and robustness. In this work, we propose One-Shot Gesture Recognition (OSG), a novel method that enables real-time, pose-based, temporal gesture recognition underwater from a single demonstration, eliminating the need for extensive dataset collection or model retraining. OSG leverages shape-based classification techniques, including Hu moments, Zernike moments, and Fourier descriptors, to robustly recognize gestures in visually challenging underwater environments. Our system achieves high accuracy on real-world underwater data and runs efficiently on embedded hardware commonly found on autonomous underwater vehicles (AUVs), demonstrating its feasibility for onboard deployment. Compared to deep learning approaches, OSG is lightweight, computationally efficient, and highly adaptable, making it well suited for diver-to-robot communication. We evaluate OSG on an augmented gesture dataset and on real-world underwater video, comparing its accuracy against deep learning methods. Our results show OSG's potential to enhance U-HRI by enabling the immediate deployment of user-defined gestures without the constraints of a predefined gesture language.
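To make the shape-based classification step concrete, the sketch below shows how two of the descriptors named in the abstract (Hu moments and Fourier descriptors) can be computed and compared against a single stored demonstration. This is a minimal illustration under our own assumptions, not the authors' implementation: we assume a gesture has been summarized as a binary image (e.g., a rasterized pose trajectory), and the function names and the simple nearest-template matcher are illustrative. Zernike moments, also mentioned in the abstract, are available in libraries such as mahotas and are omitted here for brevity.

```python
# Minimal sketch of shape-descriptor extraction and one-shot matching.
# Assumes a gesture is represented as a single-channel binary image;
# function names and thresholds are illustrative, not from the paper.
import numpy as np
import cv2

def hu_descriptor(binary_img):
    """Seven Hu moment invariants, log-scaled for numerical stability."""
    m = cv2.moments(binary_img, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log transform compresses the large dynamic range of raw Hu moments.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def fourier_descriptor(contour, n_coeffs=16):
    """Translation-, scale-, and rotation-invariant Fourier descriptor."""
    pts = contour.reshape(-1, 2).astype(np.float64)
    z = pts[:, 0] + 1j * pts[:, 1]   # contour as complex sequence
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                  # drop DC term: translation invariance
    mags = np.abs(coeffs)            # magnitudes only: rotation invariance
    mags /= (mags[1] + 1e-12)        # normalize by first harmonic: scale invariance
    return mags[1:n_coeffs + 1]

def match_one_shot(query_img, template_img):
    """Distance between a query gesture and one stored demonstration;
    the smallest distance over stored templates gives the one-shot label."""
    return np.linalg.norm(hu_descriptor(query_img) - hu_descriptor(template_img))

if __name__ == "__main__":
    # Toy example: a filled disk as a stand-in for a gesture silhouette.
    img = np.zeros((128, 128), dtype=np.uint8)
    cv2.circle(img, (64, 64), 40, 255, -1)
    print("Hu descriptor:", hu_descriptor(img))
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    print("Fourier descriptor:", fourier_descriptor(contours[0]))
```

Because these descriptors are invariant to translation, scale, and rotation, a single demonstration suffices as a template, which is what makes a one-shot, retraining-free pipeline plausible on embedded AUV hardware.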
@article{joshi2025_2503.00676,
  title   = {One-Shot Gesture Recognition for Underwater Diver-To-Robot Communication},
  author  = {Rishikesh Joshi and Junaed Sattar},
  journal = {arXiv preprint arXiv:2503.00676},
  year    = {2025}
}