Binarized 3D Whole-body Human Mesh Recovery

24 November 2023
Zhiteng Li
Yulun Zhang
Jing Lin
Haotong Qin
Jinjin Gu
Xin Yuan
Linghe Kong
Xiaokang Yang
Abstract

3D whole-body human mesh recovery aims to reconstruct the 3D human body, face, and hands from a single image. Although powerful deep learning models have achieved accurate estimation in this task, they require enormous memory and computational resources, and consequently can hardly be deployed on resource-limited edge devices. In this work, we propose the Binarized Dual Residual Network (BiDRN), a novel quantization method that estimates 3D body, face, and hand parameters efficiently. Specifically, we design a basic unit, the Binarized Dual Residual Block (BiDRB), composed of a Local Convolution Residual (LCR) and a Block Residual (BR), which preserves full-precision information as much as possible. We generalize the LCR to four kinds of convolutional modules so that full-precision information can be propagated even between mismatched dimensions. We also binarize the face and hands box-prediction network into a Binarized BoxNet, which further reduces model redundancy. Comprehensive quantitative and qualitative experiments demonstrate the effectiveness of BiDRN, which achieves significant improvements over state-of-the-art binarization algorithms. Moreover, our proposed BiDRN achieves performance comparable to the full-precision method Hand4Whole while using just 22.1% of the parameters and 14.8% of the operations. We will release all the code and pretrained models.
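
The abstract's core idea, preserving full-precision information by routing it around 1-bit operations through residual shortcuts, can be illustrated with a short sketch. The following is a minimal PyTorch rendering of a sign-binarized convolution trained with a straight-through estimator, wrapped in an LCR-style identity shortcut. The class names (SignSTE, BinarizedConv2d, LocalConvResidual) and design details (sign binarization, BatchNorm, PReLU) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SignSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator:
    forward uses sign(x); backward passes gradients through
    unchanged wherever |x| <= 1 (clipped identity)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()


class BinarizedConv2d(nn.Module):
    """1-bit convolution: both weights and activations are
    binarized before the convolution (illustrative, not the
    paper's exact operator)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01
        )
        self.stride, self.padding = stride, padding

    def forward(self, x):
        bx = SignSTE.apply(x)
        bw = SignSTE.apply(self.weight)
        return F.conv2d(bx, bw, stride=self.stride, padding=self.padding)


class LocalConvResidual(nn.Module):
    """Sketch of a Local Convolution Residual (LCR): the
    full-precision input bypasses the 1-bit convolution via an
    identity shortcut, so information lost to binarization is
    reinjected after the conv."""

    def __init__(self, channels):
        super().__init__()
        self.conv = BinarizedConv2d(channels, channels)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.PReLU(channels)

    def forward(self, x):
        # Full-precision residual added to the binarized-conv output.
        return self.act(self.bn(self.conv(x)) + x)

For example, LocalConvResidual(64)(torch.randn(1, 64, 32, 32)) returns a tensor of the same shape. The Block Residual (BR) of the abstract would, on this reading, add a second, longer-range shortcut around a stack of such blocks; the four LCR variants mentioned in the abstract would presumably adapt this shortcut when input and output dimensions differ.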

@article{li2025_2311.14323,
  title={BinaryHPE: 3D Human Pose and Shape Estimation via Binarization},
  author={Zhiteng Li and Yulun Zhang and Jing Lin and Haotong Qin and Jinjin Gu and Xin Yuan and Linghe Kong and Xiaokang Yang},
  journal={arXiv preprint arXiv:2311.14323},
  year={2025}
}