ResearchTrend.AI
Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation

20 December 2023
Sudharshan Suresh
Haozhi Qi
Tingfan Wu
Taosha Fan
Luis Villaseñor-Pineda
Mike Lambeta
Jitendra Malik
Mrinal Kalakrishnan
Roberto Calandra
Michael Kaess
Joseph Ortiz
Mustafa Mukadam
arXiv | PDF | HTML
Abstract

To achieve human-level dexterity, robots must infer spatial awareness from multimodal sensing to reason over contact interactions. During in-hand manipulation of novel objects, such spatial awareness involves estimating the object's pose and shape. The status quo for in-hand perception primarily employs vision and is restricted to tracking a priori known objects. Moreover, visual occlusion of objects in-hand is inevitable during manipulation, preventing current systems from pushing beyond tasks without occlusion. We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation. Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem. We study multimodal in-hand perception in simulation and the real world, interacting with different objects via a proprioception-driven policy. Our experiments show final reconstruction F-scores of 81% and average pose drifts of 4.7 mm, further reduced to 2.3 mm with known CAD models. Additionally, under heavy visual occlusion we observe up to 94% improvements in tracking compared to vision-only methods. Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation. We release our evaluation dataset of 70 experiments, FeelSight, as a step towards benchmarking in this domain. Our neural representation driven by multimodal sensing can serve as a perception backbone towards advancing robot dexterity. Videos can be found on our project website: https://suddhu.github.io/neural-feels/
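The reconstruction F-score reported above is the standard point-cloud metric: the harmonic mean of precision (fraction of reconstructed points within a distance threshold of the ground-truth surface) and recall (fraction of ground-truth points within that threshold of the reconstruction). A minimal brute-force sketch of that metric is shown below; the function name and the choice of threshold are illustrative, not taken from the paper's evaluation code:

```python
import math

def fscore(recon, gt, tau):
    """F-score between two 3-D point sets at distance threshold tau.

    recon, gt: iterables of (x, y, z) tuples.
    precision: fraction of reconstructed points within tau of ground truth.
    recall:    fraction of ground-truth points within tau of reconstruction.
    """
    def frac_within(src, dst):
        # Brute-force nearest-neighbour check; a KD-tree would be used
        # in practice, but O(n*m) is fine for a sketch.
        hits = sum(
            min(math.dist(p, q) for q in dst) <= tau
            for p in src
        )
        return hits / len(src)

    precision = frac_within(recon, gt)
    recall = frac_within(gt, recon)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For identical point sets the score is 1.0, and it falls toward 0 as the two surfaces diverge beyond the threshold, so an 81% F-score means most of the reconstructed surface lies within the chosen tolerance of the true one.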
