ResearchTrend.AI

arXiv:2310.01404
H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation

2 October 2023
Yanjie Ze
Yuyao Liu
Ruizhe Shi
Jiaxin Qin
Zhecheng Yuan
Jiashun Wang
Huazhe Xu
Abstract

Human hands possess remarkable dexterity and have long served as a source of inspiration for robotic manipulation. In this work, we propose a human Hand-Informed visual representation learning framework to solve difficult Dexterous manipulation tasks (H-InDex) with reinforcement learning. Our framework consists of three stages: (i) pre-training representations with 3D human hand pose estimation, (ii) offline adapting representations with self-supervised keypoint detection, and (iii) reinforcement learning with exponential moving average BatchNorm. The last two stages modify only 0.36% of the pre-trained representation's parameters in total, ensuring that the knowledge from pre-training is retained to the fullest extent. We empirically study 12 challenging dexterous manipulation tasks and find that H-InDex largely surpasses strong baseline methods and recent visual foundation models for motor control. Code is available at https://yanjieze.com/H-InDex .
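Stage (iii) relies on the exponential-moving-average (EMA) update that BatchNorm-style layers use for their running statistics. As a rough illustration of that mechanism (not the paper's implementation; the function names and momentum value here are illustrative), the running mean and variance are blended with each batch's statistics:

```python
# Minimal sketch of an EMA update for BatchNorm-style running statistics.
# `momentum` and the function names are illustrative assumptions, not taken
# from the H-InDex codebase.

def ema_update(running, batch_value, momentum=0.1):
    """Blend a new batch statistic into the running estimate."""
    return (1.0 - momentum) * running + momentum * batch_value

def update_batchnorm_stats(running_mean, running_var, batch, momentum=0.1):
    """Update running mean/variance from one batch of scalar activations."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n  # biased batch variance
    return (ema_update(running_mean, mean, momentum),
            ema_update(running_var, var, momentum))

# Example: a constant batch pulls the running stats toward mean 1, variance 0.
rm, rv = update_batchnorm_stats(0.0, 1.0, [1.0, 1.0, 1.0], momentum=0.1)
```

Because only these running statistics (and the small adapted modules) change, the bulk of the pre-trained representation stays frozen.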
