ResearchTrend.AI

AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors (arXiv:2502.12191)

15 February 2025
Ruoxuan Feng, Jiangyu Hu, Wenke Xia, Tianci Gao, Ao Shen, Yuhao Sun, Bin Fang, Di Hu

Papers citing "AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors"

4 / 4 papers shown
VTLA: Vision-Tactile-Language-Action Model with Preference Learning for Insertion Manipulation
Chaofan Zhang, Peng Hao, Xiaoge Cao, Xiaoshuai Hao, Shaowei Cui, Shuo Wang
14 May 2025
CLTP: Contrastive Language-Tactile Pre-training for 3D Contact Geometry Understanding
Wenxuan Ma, Xiaoge Cao, Y. Zhang, Chaofan Zhang, Shaobo Yang, Peng Hao, Bin Fang, Yinghao Cai, Shaowei Cui, Shuo Wang
13 May 2025
SToLa: Self-Adaptive Touch-Language Framework with Tactile Commonsense Reasoning in Open-Ended Scenarios
Ning Cheng, Jinan Xu, Jialing Chen, Wenjuan Han
7 May 2025
General Force Sensation for Tactile Robot
Zhuo Chen, N. Ou, Xuyang Zhang, Z. Wu, Yongqiang Zhao, Y. Wang, Nathan Lepora, Lorenzo Jamone, Jiankang Deng, Shan Luo
2 March 2025