Depth-Based 3D Hand Pose Estimation: From Current Achievements to Future Goals

11 December 2017 · arXiv:1712.03917
Shanxin Yuan, Guillermo Garcia-Hernando, B. Stenger, Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee, Pavlo Molchanov, Jan Kautz, S. Honari, Liuhao Ge, Junsong Yuan, Xinghao Chen, Guijin Wang, Fan Yang, Kai Akiyama, Yang Wu, Qingfu Wan, Meysam Madadi, Sergio Escalera, Shile Li, Dongheui Lee, Iason Oikonomidis, Antonis Argyros, Tae-Kyun Kim
Abstract

In this paper, we strive to answer two questions: What is the current state of 3D hand pose estimation from depth images? And, what are the next challenges that need to be tackled? Following the successful Hands In the Million Challenge (HIM2017), we investigate the top 10 state-of-the-art methods on three tasks: single-frame 3D pose estimation, 3D hand tracking, and hand pose estimation during object interaction. We analyze the performance of different CNN structures with regard to hand shape, joint visibility, viewpoint, and articulation distributions. Our findings include: (1) isolated 3D hand pose estimation achieves low mean errors (10 mm) in the viewpoint range of [70, 120] degrees, but it is far from being solved for extreme viewpoints; (2) 3D volumetric representations outperform 2D CNNs, better capturing the spatial structure of the depth data; (3) discriminative methods still generalize poorly to unseen hand shapes; (4) while joint occlusions pose a challenge for most methods, explicit modeling of structural constraints can significantly narrow the gap between errors on visible and occluded joints.
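The mean-error figures above refer to the standard benchmark metric: the average per-joint 3D Euclidean distance between predicted and ground-truth joints, with visible and occluded joints reported separately for finding (4). Below is a minimal sketch of that metric; the function name and array layout are illustrative assumptions, not the challenge's released evaluation code.

```python
import numpy as np

def mean_joint_error(pred, gt, visible=None):
    """Mean Euclidean 3D joint error in millimetres.

    pred, gt : (N, J, 3) predicted and ground-truth joint positions.
    visible  : optional (N, J) boolean mask, True where a joint is
               visible in the depth image.
    """
    err = np.linalg.norm(pred - gt, axis=-1)  # (N, J) per-joint errors
    if visible is None:
        return err.mean()
    # Separate averages for visible vs. occluded joints (finding 4).
    return err[visible].mean(), err[~visible].mean()
```

Finding (2) concerns feeding the network a 3D volumetric representation of the hand rather than the raw 2D depth image. A minimal occupancy-grid voxelization sketch follows, assuming the depth pixels have already been back-projected to 3D points in millimetres; the function name, the 300 mm cube, and the 32-voxel resolution are illustrative assumptions rather than any specific method from the challenge.

```python
def depth_points_to_voxels(points, center, cube_mm=300.0, res=32):
    """Quantise a hand point cloud into a res**3 occupancy grid.

    points  : (P, 3) 3D points in mm, back-projected from the depth map.
    center  : (3,) reference point, e.g. the hand centroid.
    cube_mm : side length of the bounding cube around the hand.
    """
    grid = np.zeros((res, res, res), dtype=np.float32)
    # Map each point into [0, res) voxel coordinates.
    idx = np.floor(((points - center) / cube_mm + 0.5) * res).astype(int)
    # Keep only points that fall inside the bounding cube.
    inside = np.all((idx >= 0) & (idx < res), axis=1)
    i, j, k = idx[inside].T
    grid[i, j, k] = 1.0  # mark occupied voxels
    return grid
```

Unlike a 2D depth image, the resulting grid preserves metric distances along all three axes, which is one plausible reading of why volumetric inputs "better capture the spatial structure of the depth data".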
