Semi-Supervised Spoken Language Glossification

Abstract

Spoken language glossification (SLG) aims to translate spoken language text into sign language glosses, i.e., a written record of sign language. In this work, we present a framework named Semi-Supervised Spoken Language Glossification ($S^3$LG) for SLG. To tackle the bottleneck of limited parallel data in SLG, our $S^3$LG incorporates large-scale monolingual spoken language text into SLG training. The proposed framework follows the self-training structure, iteratively annotating and learning from pseudo labels. Considering the lexical similarity and syntactic difference between sign language and spoken language, our $S^3$LG adopts both a rule-based heuristic and a model-based approach for auto-annotation. During training, we randomly mix these complementary synthetic datasets and mark their differences with a special token. As the synthetic data may be of lower quality, $S^3$LG further leverages consistency regularization to reduce the negative impact of noise in the synthetic data. Extensive experiments are conducted on public benchmarks to demonstrate the effectiveness of $S^3$LG. Our code is available at \url{https://github.com/yaohj11/S3LG}.
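The data-mixing step described above can be sketched in a few lines. This is a minimal, hypothetical illustration assuming the two pseudo-labeled datasets are lists of (text, gloss) pairs; the token names and function are illustrative and not taken from the paper.

```python
import random

# Illustrative special tokens marking the annotation source of each
# synthetic pair (rule-based heuristic vs. model-based annotator).
RULE_TOKEN = "<rule>"
MODEL_TOKEN = "<model>"

def mix_synthetic(rule_pairs, model_pairs, seed=0):
    """Tag each (text, pseudo_gloss) pair with a token identifying its
    annotation source, then randomly shuffle the union into one
    training set, as the abstract describes."""
    tagged = [(f"{RULE_TOKEN} {text}", gloss) for text, gloss in rule_pairs]
    tagged += [(f"{MODEL_TOKEN} {text}", gloss) for text, gloss in model_pairs]
    rng = random.Random(seed)
    rng.shuffle(tagged)
    return tagged

# Toy usage with made-up pairs
rule_pairs = [("hello world", "HELLO WORLD")]
model_pairs = [("good morning", "GOOD MORNING")]
mixed = mix_synthetic(rule_pairs, model_pairs)
```

In the actual framework, the tagged pairs would feed a seq2seq SLG model, with consistency regularization applied to dampen label noise; the sketch covers only the mixing-and-marking idea.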
