
Exploring Pose-based Sign Language Translation: Ablation Studies and Attention Insights

Tomas Zelezny, Jakub Straka, Vaclav Javorek, Ondrej Valach, Marek Hruz, Ivan Gruber
Main: 8 pages, 18 figures, 5 tables; bibliography: 2 pages; appendix: 6 pages
Abstract

Sign Language Translation (SLT) has evolved significantly, moving from isolated recognition approaches to complex, continuous gloss-free translation systems. This paper explores the impact of pose-based data preprocessing techniques (normalization, interpolation, and augmentation) on SLT performance. We employ a transformer-based architecture, adapting a modified T5 encoder-decoder model to process pose representations. Through extensive ablation studies on the YouTubeASL and How2Sign datasets, we analyze how different preprocessing strategies affect translation accuracy. Our results demonstrate that appropriate normalization, interpolation, and augmentation techniques can significantly improve model robustness and generalization ability. Additionally, we provide an in-depth analysis of the model's attention patterns and reveal behavior suggesting that adding a dedicated register token can improve overall model performance. We publish our code, including the preprocessed YouTubeASL data, in our GitHub repository.
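As a rough, hypothetical illustration of the three preprocessing steps named in the abstract, the sketch below operates on 2D pose sequences of shape (T, K, 2) with per-joint detector confidences. The reference joints (the shoulders, at COCO keypoint indices 5 and 6), the confidence threshold, and the augmentation ranges are illustrative assumptions for this sketch, not the settings used in the paper.

import numpy as np

def normalize_pose(frames, ref_a=5, ref_b=6):
    """Center each frame on the midpoint of two reference joints
    (here assumed to be the shoulders) and scale by their distance."""
    origin = (frames[:, ref_a] + frames[:, ref_b]) / 2.0              # (T, 2)
    scale = np.linalg.norm(frames[:, ref_a] - frames[:, ref_b], axis=-1)
    scale = np.where(scale > 1e-6, scale, 1.0)                        # avoid /0
    return (frames - origin[:, None, :]) / scale[:, None, None]

def interpolate_missing(frames, confidence, thresh=0.3):
    """Linearly interpolate joints whose detector confidence is below thresh."""
    out = frames.copy()
    T, K, D = frames.shape
    t = np.arange(T)
    for k in range(K):
        valid = confidence[:, k] >= thresh
        if valid.sum() < 2 or valid.all():
            continue
        for d in range(D):
            out[~valid, k, d] = np.interp(t[~valid], t[valid], frames[valid, k, d])
    return out

def augment(frames, rng):
    """One plausible augmentation recipe: random rotation, scale, and jitter."""
    theta = rng.uniform(-np.pi / 12, np.pi / 12)          # up to +/- 15 degrees
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    scaled = frames @ rot.T * rng.uniform(0.9, 1.1)
    return scaled + rng.normal(0.0, 0.005, size=frames.shape)

The register-token finding could take the form of a learnable embedding prepended to the encoder's input sequence; the PyTorch sketch below shows one common way to implement such a token, and is an assumption about the mechanism rather than the paper's exact design.

import torch
import torch.nn as nn

class RegisterToken(nn.Module):
    """Prepend a single learnable register token to a (B, T, d_model) sequence."""
    def __init__(self, d_model):
        super().__init__()
        self.register = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, x):
        # Expand the token across the batch and concatenate along time.
        return torch.cat([self.register.expand(x.size(0), -1, -1), x], dim=1)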

@article{zelezny2025_2507.01532,
  title={Exploring Pose-based Sign Language Translation: Ablation Studies and Attention Insights},
  author={Tomas Zelezny and Jakub Straka and Vaclav Javorek and Ondrej Valach and Marek Hruz and Ivan Gruber},
  journal={arXiv preprint arXiv:2507.01532},
  year={2025}
}