
AntiGrounding: Lifting Robotic Actions into VLM Representation Space for Decision Making

Main: 9 pages
Appendix: 15 pages
Bibliography: 9 pages
Figures: 12
Tables: 3
Abstract

Vision-Language Models (VLMs) encode knowledge and reasoning capabilities for robotic manipulation within their high-dimensional representation spaces. However, current approaches often project this knowledge into compressed intermediate representations, discarding important task-specific information such as fine-grained spatial or semantic details. To address this, we propose AntiGrounding, a new framework that reverses the instruction grounding process. It lifts candidate actions directly into the VLM representation space, renders trajectories from multiple views, and uses structured visual question answering for instruction-based decision making. This enables zero-shot synthesis of optimal closed-loop robot trajectories for new tasks. We also propose an offline policy refinement module that leverages past experience to enhance long-term performance. Experiments in both simulation and real-world environments show that our method outperforms baselines across diverse robotic manipulation tasks.

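To make the described pipeline concrete, the following is a minimal, hypothetical sketch of the decision loop in Python. All names here (Candidate, propose_candidates, render_views, vlm_score, antigrounding_step) are illustrative assumptions, not the authors' implementation; the trajectory sampler, renderer, and VLM query are replaced with dummy stand-ins.

import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Candidate:
    trajectory: List[tuple]                               # end-effector waypoints (x, y, z)
    renders: List[object] = field(default_factory=list)   # multi-view renderings
    score: float = 0.0

def propose_candidates(n: int = 8) -> List[Candidate]:
    # Stand-in sampler: random 5-waypoint trajectories.
    return [Candidate([(random.random(),) * 3 for _ in range(5)]) for _ in range(n)]

def render_views(trajectory, n_views: int = 3):
    # Placeholder for overlaying the candidate trajectory on images from several viewpoints.
    return [f"view_{i}" for i in range(n_views)]

def vlm_score(instruction: str, renders) -> float:
    # Placeholder for structured visual question answering: the VLM would rate how
    # well the rendered trajectory satisfies the instruction; here, a dummy score.
    return random.random()

def antigrounding_step(instruction: str) -> Candidate:
    # 1. Lift candidate actions into the VLM's visual input space by rendering
    #    each trajectory from multiple views.
    candidates = propose_candidates()
    for c in candidates:
        c.renders = render_views(c.trajectory)
        c.score = vlm_score(instruction, c.renders)
    # 2. Select the highest-scoring trajectory; in a closed loop, this step is
    #    re-run after execution to react to new observations.
    return max(candidates, key=lambda c: c.score)

if __name__ == "__main__":
    best = antigrounding_step("put the red block in the bowl")
    print(len(best.trajectory), best.score)
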
@article{li2025_2506.12374,
  title={AntiGrounding: Lifting Robotic Actions into VLM Representation Space for Decision Making},
  author={Wenbo Li and Shiyi Wang and Yiteng Chen and Huiping Zhuang and Qingyao Wu},
  journal={arXiv preprint arXiv:2506.12374},
  year={2025}
}