TinyVLA: Towards Fast, Data-Efficient Vision-Language-Action Models for Robotic Manipulation

19 September 2024
Junjie Wen, Yichen Zhu, Jinming Li, Minjie Zhu, Kun Wu, Zhiyuan Xu, Ning Liu, Ran Cheng, Chaomin Shen, Yaxin Peng, Feifei Feng, Jian Tang
Abstract

Vision-Language-Action (VLA) models have shown remarkable potential in visuomotor control and instruction comprehension through end-to-end learning processes. However, current VLA models face significant challenges: they are slow during inference and require extensive pre-training on large amounts of robotic data, making real-world deployment difficult. In this paper, we introduce a new family of compact vision-language-action models, called TinyVLA, which offers two key advantages over existing VLA models: (1) faster inference speeds, and (2) improved data efficiency, eliminating the need for a pre-training stage. Our framework incorporates two essential components to build TinyVLA: (1) initializing the policy backbone with robust, high-speed multimodal models, and (2) integrating a diffusion policy decoder during fine-tuning to enable precise robot actions. We conducted extensive evaluations of TinyVLA in both simulation and on real robots, demonstrating that our approach significantly outperforms the state-of-the-art VLA model, OpenVLA, in terms of speed and data efficiency, while delivering comparable or superior performance. Additionally, TinyVLA exhibits strong generalization capabilities across various dimensions, including language instructions, novel objects, unseen positions, changes in object appearance, background variations, and environmental shifts, often matching or exceeding the performance of OpenVLA. We believe that TinyVLA offers an interesting perspective on utilizing pre-trained multimodal models for policy learning. Our project is at this https URL.

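The abstract names two components: a pre-trained compact multimodal backbone and a diffusion policy decoder attached during fine-tuning. Below is a minimal sketch, not the authors' released code, of how such a backbone-plus-diffusion-head pairing could be trained with a standard DDPM-style denoising loss. The stand-in backbone, the action and chunk dimensions, and the timestep count are all illustrative assumptions rather than details taken from the paper.

# Minimal sketch (not the authors' code) of the TinyVLA idea: a compact
# multimodal backbone produces a conditioning vector, and a diffusion policy
# head is trained to denoise robot action chunks conditioned on it.
# The backbone here is a stand-in MLP; in the paper it would be a pre-trained
# compact vision-language model. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

ACTION_DIM = 7        # e.g. 6-DoF end-effector delta + gripper (assumption)
CHUNK = 8             # number of future actions predicted per step (assumption)
COND_DIM = 256        # backbone conditioning width (assumption)
T_STEPS = 100         # diffusion timesteps (assumption)


class StandInBackbone(nn.Module):
    """Placeholder for a pre-trained compact vision-language model."""
    def __init__(self, img_feat=512, txt_feat=128, out=COND_DIM):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_feat + txt_feat, 512), nn.GELU(), nn.Linear(512, out)
        )

    def forward(self, img_feat, txt_feat):
        return self.fuse(torch.cat([img_feat, txt_feat], dim=-1))


class DiffusionPolicyHead(nn.Module):
    """Predicts the noise added to an action chunk, given the backbone output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CHUNK * ACTION_DIM + COND_DIM + 1, 512), nn.GELU(),
            nn.Linear(512, 512), nn.GELU(),
            nn.Linear(512, CHUNK * ACTION_DIM),
        )

    def forward(self, noisy_actions, cond, t):
        t_emb = t.float().unsqueeze(-1) / T_STEPS          # crude timestep embedding
        x = torch.cat([noisy_actions.flatten(1), cond, t_emb], dim=-1)
        return self.net(x).view(-1, CHUNK, ACTION_DIM)


def training_step(backbone, head, img_feat, txt_feat, actions):
    """One DDPM-style denoising loss step on ground-truth action chunks."""
    betas = torch.linspace(1e-4, 2e-2, T_STEPS)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T_STEPS, (actions.shape[0],))
    noise = torch.randn_like(actions)
    ab = alpha_bar[t].view(-1, 1, 1)
    noisy = ab.sqrt() * actions + (1 - ab).sqrt() * noise  # forward diffusion
    cond = backbone(img_feat, txt_feat)
    pred = head(noisy, cond, t)
    return nn.functional.mse_loss(pred, noise)


if __name__ == "__main__":
    backbone, head = StandInBackbone(), DiffusionPolicyHead()
    img = torch.randn(4, 512)                 # pretend image features
    txt = torch.randn(4, 128)                 # pretend instruction features
    acts = torch.randn(4, CHUNK, ACTION_DIM)  # demonstration action chunks
    loss = training_step(backbone, head, img, txt, acts)
    loss.backward()
    print(f"denoising loss: {loss.item():.4f}")

At inference, such a head would iteratively denoise a sampled action chunk conditioned on the backbone output; the abstract attributes TinyVLA's faster inference and data efficiency to keeping that backbone compact rather than pre-training a large one on robot data.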
@article{wen2025_2409.12514,
  title={TinyVLA: Towards Fast, Data-Efficient Vision-Language-Action Models for Robotic Manipulation},
  author={Junjie Wen and Yichen Zhu and Jinming Li and Minjie Zhu and Kun Wu and Zhiyuan Xu and Ning Liu and Ran Cheng and Chaomin Shen and Yaxin Peng and Feifei Feng and Jian Tang},
  journal={arXiv preprint arXiv:2409.12514},
  year={2025}
}