
Benchmarking Vision, Language, & Action Models in Procedurally Generated, Open Ended Action Environments

Main: 14 pages
38 figures
Bibliography: 2 pages
Appendix: 5 pages
Abstract

Vision-language-action (VLA) models represent an important step toward general-purpose robotic systems by integrating visual perception, language understanding, and action execution. However, systematic evaluation of these models, particularly their zero-shot generalization in procedurally generated, out-of-distribution (OOD) environments, remains limited. In this paper, we introduce MultiNet v0.2, a comprehensive benchmark designed to evaluate and analyze the generalization performance of state-of-the-art VLMs and VLAs, including GPT-4o, GPT-4.1, OpenVLA, Pi0 Base, and Pi0 FAST, on diverse procedural tasks from the Procgen benchmark. Our analysis reveals several critical insights: (1) all evaluated models exhibit significant limitations in zero-shot generalization to OOD tasks, with performance heavily influenced by factors such as action representation and task complexity; (2) VLAs generally outperform other models due to their robust architectural design; and (3) VLM variants demonstrate substantial improvements when constrained appropriately, highlighting the sensitivity of model performance to precise prompt engineering. We release our benchmark, evaluation framework, and findings to enable the assessment of future VLA models and to identify critical areas for improvement in their application to out-of-distribution digital tasks.
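To make the evaluation setup concrete, below is a minimal sketch of a zero-shot evaluation loop over Procgen tasks. It assumes the open-source procgen gym environments and the classic gym step API they ship against; query_model is a hypothetical placeholder for any of the evaluated VLM/VLA policies, and the released MultiNet framework likely exposes a different interface.

# Minimal sketch of zero-shot evaluation on Procgen (not the MultiNet code).
import gym
import numpy as np

def query_model(observation: np.ndarray) -> int:
    # Hypothetical stand-in: map a 64x64 RGB frame to one of Procgen's
    # 15 discrete actions (e.g., via a prompted VLM or a VLA action head).
    return int(np.random.randint(15))  # placeholder random policy

def evaluate_zero_shot(env_name: str = "coinrun", episodes: int = 10) -> float:
    # num_levels=0 draws from an unbounded set of procedurally generated
    # levels, so every episode is effectively unseen by the model.
    env = gym.make(f"procgen:procgen-{env_name}-v0",
                   num_levels=0, distribution_mode="hard")
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(query_model(obs))
            total += reward
        returns.append(total)
    env.close()
    return float(np.mean(returns))

if __name__ == "__main__":
    print(f"mean return: {evaluate_zero_shot():.2f}")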

View on arXiv
@article{guruprasad2025_2505.05540,
  title={Benchmarking Vision, Language, & Action Models in Procedurally Generated, Open Ended Action Environments},
  author={Pranav Guruprasad and Yangyue Wang and Sudipta Chowdhury and Harshvardhan Sikka and Paul Pu Liang},
  journal={arXiv preprint arXiv:2505.05540},
  year={2025}
}
