Tactile Beyond Pixels: Multisensory Touch Representations for Robot Manipulation

17 June 2025
Carolina Higuera
Akash Sharma
Taosha Fan
Chaithanya Krishna Bodduluri
Byron Boots
Michael Kaess
Mike Lambeta
Tingfan Wu
Zixi Liu
Francois Robert Hogan
Mustafa Mukadam
arXiv (abs) · PDF · HTML
Main: 8 Pages
16 Figures
Bibliography: 5 Pages
Appendix: 6 Pages
Abstract

We present Sparsh-X, the first multisensory touch representations across four tactile modalities: image, audio, motion, and pressure. Trained on ~1M contact-rich interactions collected with the Digit 360 sensor, Sparsh-X captures complementary touch signals at diverse temporal and spatial scales. By leveraging self-supervised learning, Sparsh-X fuses these modalities into a unified representation that captures physical properties useful for robot manipulation tasks. We study how to effectively integrate real-world touch representations for both imitation learning and tactile adaptation of sim-trained policies, showing that Sparsh-X boosts policy success rates by 63% over an end-to-end model using tactile images and improves robustness by 90% in recovering object states from touch. Finally, we benchmark Sparsh-X's ability to make inferences about physical properties, such as object-action identification, material-quantity estimation, and force estimation. Sparsh-X improves accuracy in characterizing physical properties by 48% compared to end-to-end approaches, demonstrating the advantages of multisensory pretraining for capturing features essential for dexterous manipulation.
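
The fusion step described in the abstract can be pictured with a minimal sketch: one encoder per tactile modality (image, audio, motion, pressure) projects its signal into a shared embedding space, and a small transformer fuses the resulting tokens into a single touch representation. The code below is an illustrative assumption, not the authors' Sparsh-X architecture; the class name TouchFusion, all layer sizes, and the input shapes are hypothetical.

# Hypothetical sketch of multisensory touch fusion (not the authors' Sparsh-X code).
# One encoder per modality maps the raw signal to a token; a transformer encoder
# fuses the four tokens into a unified touch embedding.
import torch
import torch.nn as nn

class TouchFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Illustrative encoders; input sizes are assumptions, not Digit 360 specs.
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.audio_enc = nn.Sequential(
            nn.Conv1d(1, 32, 9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, dim))
        self.motion_enc = nn.Linear(6, dim)    # e.g. a 6-axis IMU sample (assumed)
        self.pressure_enc = nn.Linear(8, dim)  # e.g. 8 pressure channels (assumed)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, audio, motion, pressure):
        # One token per modality, fused by self-attention, then mean-pooled.
        tokens = torch.stack([self.image_enc(image), self.audio_enc(audio),
                              self.motion_enc(motion), self.pressure_enc(pressure)], dim=1)
        return self.fusion(tokens).mean(dim=1)  # unified touch embedding, shape (B, dim)

# Dummy batch of two contact snapshots, just to show the shapes involved.
model = TouchFusion()
z = model(torch.randn(2, 3, 64, 64),  # tactile image
          torch.randn(2, 1, 16000),   # contact audio waveform
          torch.randn(2, 6),          # motion (IMU) reading
          torch.randn(2, 8))          # pressure reading
print(z.shape)  # torch.Size([2, 256])

In the actual system the representation would be pretrained with a self-supervised objective and then consumed by downstream manipulation policies; this sketch only illustrates how four heterogeneous tactile signals can be merged into one embedding.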

@article{higuera2025_2506.14754,
  title={Tactile Beyond Pixels: Multisensory Touch Representations for Robot Manipulation},
  author={Carolina Higuera and Akash Sharma and Taosha Fan and Chaithanya Krishna Bodduluri and Byron Boots and Michael Kaess and Mike Lambeta and Tingfan Wu and Zixi Liu and Francois Robert Hogan and Mustafa Mukadam},
  journal={arXiv preprint arXiv:2506.14754},
  year={2025}
}