
RL-VLA³: Reinforcement Learning VLA Accelerating via Full Asynchronism

Zhong Guan
Haoran Sun
Yongjian Guo
Shuai Di
Xiaodong Bai
Jing Long
Tianyun Zhao
Mingxi Luo
Chen Zhou
Yucheng Guo
Qiming Yang
Wanting Xu
Wen Huang
Yunxuan Ma
Hongke Zhao
Likang Wu
Xiaotie Deng
Xi Xiao
Sheng Wen
Yicheng Gong
Junwu Xiong
Main: 8 pages
Figures: 5
Tables: 4
Bibliography: 2 pages
Appendix: 3 pages
Abstract

In recent years, Vision-Language-Action (VLA) models have emerged as a crucial pathway toward general embodied intelligence, yet their training efficiency has become a key bottleneck. Although existing reinforcement learning (RL)-based training frameworks such as RLinf can enhance model generalization, they still rely on synchronous execution, leading to severe resource underutilization and throughput limitations across the environment-interaction, policy-generation (rollout), and model-update (actor) phases. To overcome this challenge, this paper proposes and implements, for the first time, a fully asynchronous policy-training framework spanning the entire pipeline, from environment interaction through rollout generation to actor policy updates. Drawing systematically on asynchronous optimization ideas from large-model RL, the framework adopts a multi-level decoupled architecture: asynchronous parallelization of environment interaction and trajectory collection, streaming execution of policy generation, and decoupled scheduling of training updates. We validate the method across diverse VLA models and environments. On the LIBERO benchmark, the framework achieves throughput improvements of up to 59.25% over existing synchronous strategies; with further optimization of the separation strategies, throughput increases by as much as 126.67%. Ablation studies verify the contribution of each asynchronous component, and scaling experiments from 8 to 256 GPUs demonstrate excellent scalability under most conditions.
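The decoupling described in the abstract can be illustrated with a minimal sketch: environment interaction, rollout generation, and actor updates run as independent workers connected by bounded queues, so no stage blocks waiting for the others to finish a synchronous round. This is only an assumed toy illustration of the general producer-consumer pattern; the names (env_worker, rollout_worker, actor_learner) and the threading/queue setup are illustrative and are not the paper's actual API or implementation.

```python
# Toy sketch of a fully-asynchronous RL pipeline: env workers stream
# observations, a rollout worker acts with possibly stale weights, and
# the actor updates the policy without stalling the other stages.
import queue
import random
import threading
import time

obs_queue = queue.Queue(maxsize=64)    # env workers -> rollout worker
traj_queue = queue.Queue(maxsize=64)   # rollout worker -> actor learner

policy_version = 0                      # stands in for shared policy weights
policy_lock = threading.Lock()

def env_worker(env_id: int, steps: int) -> None:
    """Interact with a (stubbed) environment and stream observations."""
    for t in range(steps):
        obs = (env_id, t, random.random())  # stand-in for a real observation
        obs_queue.put(obs)                  # blocks only when the queue is full

def rollout_worker(n_items: int) -> None:
    """Generate actions with the current (possibly stale) policy snapshot."""
    for _ in range(n_items):
        obs = obs_queue.get()
        with policy_lock:
            version = policy_version        # record which weights produced this
        action = random.random()            # stand-in for a model forward pass
        traj_queue.put((obs, action, version))

def actor_learner(n_updates: int, batch_size: int) -> None:
    """Consume trajectories and update the policy, decoupled from rollout."""
    global policy_version
    for _ in range(n_updates):
        batch = [traj_queue.get() for _ in range(batch_size)]
        time.sleep(0.01)                    # stand-in for a gradient step
        with policy_lock:
            policy_version += 1             # new weights picked up lazily

envs = [threading.Thread(target=env_worker, args=(i, 32)) for i in range(4)]
rollout = threading.Thread(target=rollout_worker, args=(128,))
actor = threading.Thread(target=actor_learner, args=(8, 16))
for t in envs + [rollout, actor]:
    t.start()
for t in envs + [rollout, actor]:
    t.join()
print("final policy version:", policy_version)
```

The key design point this sketch captures is that each stage only blocks on its own queue, so slow environments, slow generation, or slow updates degrade throughput gracefully instead of forcing a global synchronization barrier; tracking the policy version per trajectory is what makes off-policy corrections possible when the actor has moved ahead of the rollout weights.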
