ResearchTrend.AI

arXiv:1902.06273 · Cited By
"Touching to See" and "Seeing to Feel": Robotic Cross-modal Sensory Data Generation for Visual-Tactile Perception

17 February 2019
Jet-Tsyn Lee, Danushka Bollegala, Shan Luo

Papers citing ""Touching to See" and "Seeing to Feel": Robotic Cross-modal Sensory Data Generation for Visual-Tactile Perception"

14 / 14 papers shown
ACROSS: A Deformation-Based Cross-Modal Representation for Robotic Tactile Perception
W. Z. E. Amri, Malte Kuhlmann, Nicolás Navarro-Guerrero
20 Feb 2025

Multi-modal perception for soft robotic interactions using generative models
Enrico Donato, Egidio Falotico, T. G. Thuruthel
05 Apr 2024

PseudoTouch: Efficiently Imaging the Surface Feel of Objects for Robotic Manipulation
Adrian Rofer, Nick Heppert, Abdallah Ayman, Eugenio Chisari, Abhinav Valada
22 Mar 2024

Learn from Incomplete Tactile Data: Tactile Representation Learning with Masked Autoencoders
G. Cao, Jiaqi Jiang, Danushka Bollegala, Shan Luo
14 Jul 2023

Beyond Flat GelSight Sensors: Simulation of Optical Tactile Sensors of Complex Morphologies for Sim2Real Learning
D. F. Gomes, Paolo Paoletti, Shan Luo
21 May 2023

Vis2Hap: Vision-based Haptic Rendering by Cross-modal Generation
G. Cao, Jiaqi Jiang, N. Mao, Danushka Bollegala, Min Li, Shan Luo
17 Jan 2023

Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models
Zhiqiu Lin, Samuel Yu, Zhiyi Kuang, Deepak Pathak, Deva Ramanan
16 Jan 2023

Where Shall I Touch? Vision-Guided Tactile Poking for Transparent Object Grasping
Jiaqi Jiang, G. Cao, Aaron Butterworth, Thanh-Toan Do, Shan Luo
20 Aug 2022

Visuo-Haptic Object Perception for Robots: An Overview
Nicolás Navarro-Guerrero, Sibel Toprak, Josip Josifovski, L. Jamone
22 Mar 2022

Reducing Tactile Sim2Real Domain Gaps via Deep Texture Generation Networks
Tudor Jianu, D. F. Gomes, Shan Luo
03 Dec 2021

Sim-to-Real for Robotic Tactile Sensing via Physics-Based Simulation and Learned Latent Projections
Yashraj S. Narang, Balakumar Sundaralingam, Miles Macklin, Arsalan Mousavian, Dieter Fox
31 Mar 2021

Generation of GelSight Tactile Images for Sim2Real Learning
D. F. Gomes, Paolo Paoletti, Shan Luo
18 Jan 2021

Optimal Deep Learning for Robot Touch
Nathan Lepora, John Lloyd
04 Mar 2020

Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks
Michelle A. Lee, Yuke Zhu, Peter Zachares, Matthew Tan, K. Srinivasan, Silvio Savarese, Fei-Fei Li, Animesh Garg, Jeannette Bohg
28 Jul 2019