Unfettered Forceful Skill Acquisition with Physical Reasoning and Coordinate Frame Labeling

Abstract

Vision language models (VLMs) exhibit vast knowledge of the physical world, including intuitions about physical and spatial properties, affordances, and motion. With fine-tuning, VLMs can also natively produce robot trajectories. We demonstrate that eliciting wrenches, rather than trajectories, allows VLMs to reason explicitly about forces and leads to zero-shot generalization across a series of manipulation tasks without pretraining. We achieve this by overlaying a consistent visual representation of the relevant coordinate frames on robot-attached camera images to augment our queries. First, we show that this addition enables a versatile motion-control framework, evaluated across four tasks (opening and closing a lid, pushing a cup, and pushing a chair) that span prismatic and rotational motion, an order-of-magnitude range in force and position, different camera perspectives, annotation schemes, and two robot platforms over 220 experiments, achieving 51% success across the four tasks. Then, we demonstrate that the proposed framework enables VLMs to continually reason about interaction feedback to recover from task failure or incompletion, with and without human supervision. Finally, we observe that prompting schemes with visual annotation and embodied reasoning can bypass VLM safeguards. We characterize the contribution of each prompt component to eliciting harmful behaviors and discuss the implications for developing embodied reasoning. Our code, videos, and data are available at: this https URL.
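
To make the described pipeline concrete, below is a minimal sketch of the two steps the abstract names: projecting a labeled coordinate frame onto a robot-attached camera image, then prompting a VLM for a wrench expressed in that frame. This is our illustrative assumption of one possible implementation, not the authors' released code; the frame pose (rvec, tvec), calibration (K, dist), frame label "F1", and the query_vlm stub are all hypothetical.

```python
# Sketch (assumed implementation): overlay a labeled coordinate frame on a
# wrist-camera image, then prompt a VLM for a wrench in that frame.
import numpy as np
import cv2

def draw_frame(image, rvec, tvec, K, dist, axis_len=0.05, label="F1"):
    """Project the frame's x/y/z axes into the image and draw colored arrows."""
    axes = np.float32([[0, 0, 0], [axis_len, 0, 0],
                       [0, axis_len, 0], [0, 0, axis_len]])
    pts, _ = cv2.projectPoints(axes, rvec, tvec, K, dist)
    pts = pts.reshape(-1, 2).astype(int)
    origin = tuple(pts[0])
    # BGR colors: x = red, y = green, z = blue
    for end, color in zip(pts[1:], [(0, 0, 255), (0, 255, 0), (255, 0, 0)]):
        cv2.arrowedLine(image, origin, tuple(end), color, 2)
    cv2.putText(image, label, origin, cv2.FONT_HERSHEY_SIMPLEX,
                0.7, (255, 255, 255), 2)
    return image

PROMPT = (
    "The image shows the end-effector frame F1 (x = red, y = green, z = blue). "
    "To open the lid, output a wrench [Fx, Fy, Fz, Tx, Ty, Tz] expressed in F1, "
    "with forces in newtons and torques in newton-meters, as a JSON list."
)

# Hypothetical usage: K/dist come from camera calibration, (rvec, tvec) from
# forward kinematics; query_vlm is a placeholder for any VLM API call.
# annotated = draw_frame(frame.copy(), rvec, tvec, K, dist)
# wrench = query_vlm(annotated, PROMPT)
```

Annotating the image rather than describing the frame in text keeps the frame convention unambiguous across camera perspectives, which is the role the abstract attributes to coordinate frame labeling.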

@article{xie2025_2505.09731,
  title={Unfettered Forceful Skill Acquisition with Physical Reasoning and Coordinate Frame Labeling},
  author={William Xie and Max Conway and Yutong Zhang and Nikolaus Correll},
  journal={arXiv preprint arXiv:2505.09731},
  year={2025}
}