
SwitchVLA: Execution-Aware Task Switching for Vision-Language-Action Models

Main: 8 pages · 7 figures · 13 tables · Bibliography: 3 pages · Appendix: 8 pages
Abstract

Robots deployed in dynamic environments must not only follow diverse language instructions but also flexibly adapt when user intent changes mid-execution. While recent Vision-Language-Action (VLA) models have advanced multi-task learning and instruction following, they typically assume static task intent and fail to respond when new instructions arrive during ongoing execution. This limitation hinders natural and robust interaction in dynamic settings, such as retail or household environments, where real-time intent changes are common. We propose SwitchVLA, a unified, execution-aware framework that enables smooth and reactive task switching without external planners or additional switch-specific data. We model task switching as a behavior modulation problem conditioned on execution state and instruction context. Expert demonstrations are segmented into temporally grounded contact phases, allowing the policy to infer task progress and adjust its behavior accordingly. A multi-behavior conditional policy is then trained to generate flexible action chunks under varying behavior modes through conditioned trajectory modeling. Experiments in both simulated and real-world robotic manipulation demonstrate that SwitchVLA enables robust instruction adherence, fluid task switching, and strong generalization, outperforming prior VLA baselines in both task success rate and interaction naturalness.
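To make the "behavior modulation" idea concrete, the sketch below illustrates one plausible reading of it: the policy's behavior mode is chosen from the current contact phase and whether the instruction has changed. This is a minimal illustration, not the authors' implementation; the phase names, mode names, and the switching rule are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of execution-aware task switching as behavior modulation.
# All names (phases, modes, the switching rule) are illustrative assumptions,
# not taken from the SwitchVLA implementation.

from dataclasses import dataclass

# Temporally grounded contact phases a demonstration might be segmented into.
CONTACT_PHASES = ("pre_contact", "in_contact", "post_contact")


@dataclass
class ExecutionState:
    phase: str        # current contact phase, used as a proxy for task progress
    instruction: str  # instruction the policy is currently executing


def select_behavior(state: ExecutionState, new_instruction: str) -> str:
    """Pick a behavior mode to condition the policy on.

    - Unchanged instruction: keep executing forward.
    - Switch before contact: advance directly to the new task.
    - Switch during/after contact: roll back to a safe state first,
      since the gripper may be holding or touching an object.
    """
    if state.phase not in CONTACT_PHASES:
        raise ValueError(f"unknown phase: {state.phase!r}")
    if new_instruction == state.instruction:
        return "forward"
    if state.phase == "pre_contact":
        return "advance"
    return "rollback"
```

A conditional policy would then receive the selected mode (alongside observations and the new instruction) and generate the corresponding action chunk, e.g. `select_behavior(ExecutionState("in_contact", "pick apple"), "pick orange")` yields `"rollback"` under this rule.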

@article{li2025_2506.03574,
  title={SwitchVLA: Execution-Aware Task Switching for Vision-Language-Action Models},
  author={Meng Li and Zhen Zhao and Zhengping Che and Fei Liao and Kun Wu and Zhiyuan Xu and Pei Ren and Zhao Jin and Ning Liu and Jian Tang},
  journal={arXiv preprint arXiv:2506.03574},
  year={2025}
}