Multi-task Learning For Joint Action and Gesture Recognition

23 May 2025
Konstantinos Spathis
Nikolaos Kardaris
Petros Maragos
ArXiv (abs) · PDF · HTML
Main: 24 pages · 7 figures · 2 tables · Bibliography: 1 page · Appendix: 1 page
Abstract

In practical applications, computer vision tasks often need to be addressed simultaneously. Multi-task learning typically achieves this by jointly training a single deep neural network to learn shared representations, providing efficiency and improving generalization. Although action and gesture recognition are closely related tasks, as both focus on body and hand movements, current state-of-the-art methods handle them separately. In this paper, we show that employing a multi-task learning paradigm for action and gesture recognition yields more efficient, robust and generalizable visual representations by leveraging the synergies between these tasks. Extensive experiments on multiple action and gesture datasets demonstrate that handling actions and gestures in a single architecture can achieve better performance for both tasks than their single-task learning variants.
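
To make the shared-representation idea concrete, below is a minimal sketch (in PyTorch) of the general multi-task pattern the abstract describes: one shared backbone feeding two task-specific classification heads, trained with a joint loss. This is not the authors' architecture; the module choices, feature sizes, class counts, and the loss weight alpha are illustrative assumptions.

# Minimal multi-task sketch: shared backbone + action head + gesture head.
# All names and hyperparameters here are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class JointActionGestureModel(nn.Module):
    def __init__(self, feat_dim=512, num_actions=60, num_gestures=25):
        super().__init__()
        # Shared encoder (placeholder); in practice a spatio-temporal backbone
        # such as a 3D CNN or video transformer would be used here.
        self.backbone = nn.Sequential(
            nn.Flatten(start_dim=1),
            nn.LazyLinear(feat_dim),
            nn.ReLU(),
        )
        # Task-specific heads on top of the shared representation.
        self.action_head = nn.Linear(feat_dim, num_actions)
        self.gesture_head = nn.Linear(feat_dim, num_gestures)

    def forward(self, clips):
        shared = self.backbone(clips)
        return self.action_head(shared), self.gesture_head(shared)

def joint_loss(action_logits, gesture_logits, action_labels, gesture_labels, alpha=0.5):
    # Weighted sum of per-task cross-entropy losses; alpha is a hypothetical
    # balancing hyperparameter.
    ce = nn.CrossEntropyLoss()
    return alpha * ce(action_logits, action_labels) + (1 - alpha) * ce(gesture_logits, gesture_labels)

# Toy usage with random tensors standing in for video clips.
model = JointActionGestureModel()
clips = torch.randn(4, 8, 3, 32, 32)  # batch of 4 clips, 8 frames each
a_logits, g_logits = model(clips)
loss = joint_loss(a_logits, g_logits,
                  torch.randint(0, 60, (4,)), torch.randint(0, 25, (4,)))
loss.backward()

Because both heads backpropagate through the same backbone, gradients from the action and gesture objectives jointly shape the shared features, which is the mechanism the paper credits for the efficiency and generalization gains.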

View on arXiv
@article{spathis2025_2505.17867,
  title={Multi-task Learning For Joint Action and Gesture Recognition},
  author={Konstantinos Spathis and Nikolaos Kardaris and Petros Maragos},
  journal={arXiv preprint arXiv:2505.17867},
  year={2025}
}