Casper: Inferring Diverse Intents for Assistive Teleoperation with Vision Language Models

17 June 2025
Huihan Liu, Rutav Shah, Shuijing Liu, Jack Pittenger, Mingyo Seo, Yuchen Cui, Yonatan Bisk, Roberto Martín-Martín, Yuke Zhu
Main: 8 pages · 8 figures · 2 tables · Bibliography: 7 pages · Appendix: 7 pages
Abstract

Assistive teleoperation, where control is shared between a human and a robot, enables efficient and intuitive human-robot collaboration in diverse and unstructured environments. A central challenge in real-world assistive teleoperation is for the robot to infer a wide range of human intentions from user control inputs and to assist users with correct actions. Existing methods are either confined to simple, predefined scenarios or restricted to task-specific data distributions at training, limiting their support for real-world assistance. We introduce Casper, an assistive teleoperation system that leverages commonsense knowledge embedded in pre-trained visual language models (VLMs) for real-time intent inference and flexible skill execution. Casper incorporates an open-world perception module for a generalized understanding of novel objects and scenes, a VLM-powered intent inference mechanism that leverages commonsense reasoning to interpret snippets of teleoperated user input, and a skill library that expands the scope of prior assistive teleoperation systems to support diverse, long-horizon mobile manipulation tasks. Extensive empirical evaluation, including human studies and system ablations, demonstrates that Casper improves task performance, reduces human cognitive load, and achieves higher user satisfaction than direct teleoperation and assistive teleoperation baselines.
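
The abstract describes a three-part pipeline: an open-world perception module, VLM-based intent inference over snippets of teleoperated user input, and a skill library for execution. The paper's actual implementation is not reproduced on this page; the sketch below only illustrates how such a shared-control loop could be wired together. Every name in it (VLMClient, detect_objects, SKILL_LIBRARY, assistive_step) is a hypothetical placeholder, not the authors' API.

from dataclasses import dataclass
from typing import Callable, Dict, List

# --- Hypothetical interfaces; the paper's real components are not public here. ---

@dataclass
class SceneObject:
    name: str        # open-vocabulary label, e.g. "mug"
    position: tuple  # (x, y, z) in the robot frame

def detect_objects(rgb_image) -> List[SceneObject]:
    """Stand-in for an open-world perception module (e.g. an
    open-vocabulary detector). Returns labeled objects in the scene."""
    raise NotImplementedError

class VLMClient:
    """Stand-in for a pre-trained vision-language model endpoint."""
    def query(self, image, prompt: str) -> str:
        raise NotImplementedError

def infer_intent(vlm: VLMClient, image, objects: List[SceneObject],
                 teleop_snippet: List[dict]) -> str:
    """Ask the VLM to map a short window of user control inputs plus
    the perceived scene to a single intent label (a skill name)."""
    prompt = (
        "Objects in the scene: "
        + ", ".join(o.name for o in objects)
        + f". Recent teleoperation commands: {teleop_snippet}. "
        "Which task is the user most likely attempting? "
        "Answer with one skill name: pick, place, open_door, navigate_to."
    )
    return vlm.query(image, prompt).strip()

# Skill library mapping intent labels to executable skills.
SKILL_LIBRARY: Dict[str, Callable[..., None]] = {
    # "pick": pick_skill, "place": place_skill, ...
}

def assistive_step(vlm: VLMClient, image, teleop_snippet: List[dict]) -> None:
    """One iteration of the shared-control loop: perceive the scene,
    infer intent from the teleoperation snippet, then assist by
    executing the matching skill, else fall back to direct control."""
    objects = detect_objects(image)
    intent = infer_intent(vlm, image, objects, teleop_snippet)
    skill = SKILL_LIBRARY.get(intent)
    if skill is not None:
        skill(objects)  # robot takes over with the inferred skill
    # else: pass the user's commands through unmodified

The key design choice this sketch mirrors is that the VLM reasons over a short window of user input rather than a single command, which is what lets commonsense knowledge disambiguate between plausible goals before the robot commits to a skill.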

@article{liu2025_2506.14727,
  title={Casper: Inferring Diverse Intents for Assistive Teleoperation with Vision Language Models},
  author={Huihan Liu and Rutav Shah and Shuijing Liu and Jack Pittenger and Mingyo Seo and Yuchen Cui and Yonatan Bisk and Roberto Martín-Martín and Yuke Zhu},
  journal={arXiv preprint arXiv:2506.14727},
  year={2025}
}