
OLMD: Orientation-aware Long-term Motion Decoupling for Continuous Sign Language Recognition

Abstract

The primary challenge in continuous sign language recognition (CSLR) stems from the presence of multi-orientational and long-term motions. However, current research overlooks these crucial aspects, which significantly impacts accuracy. To tackle these issues, we propose a novel CSLR framework, Orientation-aware Long-term Motion Decoupling (OLMD), which efficiently aggregates long-term motions and decouples multi-orientational signals into easily interpretable components. Specifically, our Long-term Motion Aggregation (LMA) module filters out static redundancy while adaptively capturing abundant features of long-term motions. We further enhance orientation awareness by decoupling complex movements into horizontal and vertical components, allowing for motion purification in both orientations. Additionally, we propose two coupling mechanisms, stage coupling and cross-stage coupling, which together enrich multi-scale features and improve the generalization capability of the model. Experimentally, OLMD achieves state-of-the-art (SOTA) performance on three large-scale datasets: PHOENIX14, PHOENIX14-T, and CSL-Daily. Notably, it improves the word error rate (WER) on PHOENIX14 by an absolute 1.6% over the previous SOTA.
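The abstract describes the architecture only at a high level; the PyTorch sketch below illustrates one plausible reading of the two core ideas, (i) aggregating long-term motion after suppressing static content and (ii) decoupling motion into horizontal and vertical components. All module names, tensor shapes, the mean-based static reference, and the attention-style recombination are our assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LongTermMotionAggregation(nn.Module):
    """Sketch: subtract a temporally smoothed reference to suppress
    static content, then aggregate residual motion over a window."""

    def __init__(self, window: int = 5):
        super().__init__()
        self.window = window  # an odd window preserves sequence length

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width) frame features
        b, t, c, h, w = x.shape
        ref = x.mean(dim=1, keepdim=True)   # static reference (assumption)
        motion = x - ref                    # filter static redundancy
        motion = motion.permute(0, 2, 3, 4, 1).reshape(b, c * h * w, t)
        motion = F.avg_pool1d(motion, self.window, stride=1,
                              padding=self.window // 2)
        return motion.reshape(b, c, h, w, t).permute(0, 4, 1, 2, 3)


class OrientationDecoupling(nn.Module):
    """Sketch: pool the spatial axes into horizontal and vertical
    motion profiles, purify each with a 1-D convolution, and
    recombine them as an orientation-aware attention map."""

    def __init__(self, channels: int):
        super().__init__()
        self.h_proj = nn.Conv1d(channels, channels, 3, padding=1)
        self.v_proj = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) per-frame motion features
        horiz = self.h_proj(x.mean(dim=2))  # (B, C, W), horizontal profile
        vert = self.v_proj(x.mean(dim=3))   # (B, C, H), vertical profile
        attn = torch.sigmoid(vert.unsqueeze(3) + horiz.unsqueeze(2))
        return x * attn                     # orientation-aware reweighting
```

Under these assumptions, the two modules compose naturally: per-frame features pass through LongTermMotionAggregation, and each resulting frame is reweighted by OrientationDecoupling before temporal modeling.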

@article{yu2025_2503.08205,
  title={OLMD: Orientation-aware Long-term Motion Decoupling for Continuous Sign Language Recognition},
  author={Yiheng Yu and Sheng Liu and Yuan Feng and Min Xu and Zhelun Jin and Xuhua Yang},
  journal={arXiv preprint arXiv:2503.08205},
  year={2025}
}
Main: 7 pages; bibliography: 2 pages; 7 figures; 8 tables