arXiv: 2502.13508
VLAS: Vision-Language-Action Model With Speech Instructions For Customized Robot Manipulation
24 February 2025
Wei Zhao, Pengxiang Ding, M. Zhang, Zhefei Gong, Shuanghao Bai, H. Zhao, Donglin Wang
Papers citing "VLAS: Vision-Language-Action Model With Speech Instructions For Customized Robot Manipulation" (5 / 5 papers shown)
Interleave-VLA: Enhancing Robot Manipulation with Interleaved Image-Text Instructions
Cunxin Fan, Xiaosong Jia, Yihang Sun, Yixiao Wang, Jianglan Wei, ..., Xiangyu Zhao, M. Tomizuka, Xue Yang, Junchi Yan, Mingyu Ding
LM&Ro, VLM
04 May 2025
OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision Language Action Model
Xingcheng Zhou, Xuyuan Han, Feng Yang, Yunpu Ma, Alois C. Knoll
VLM
30 Mar 2025
PointVLA: Injecting the 3D World into Vision-Language-Action Models
Chengmeng Li, Junjie Wen, Yan Peng, Yaxin Peng, Feifei Feng, Y. X. Zhu
3DPC
10 Mar 2025
Accelerating Vision-Language-Action Model Integrated with Action Chunking via Parallel Decoding
Wenxuan Song, Jiayi Chen, Pengxiang Ding, H. Zhao, Wei Zhao, Zhide Zhong, Zongyuan Ge, Jun Ma, Haoang Li
04 Mar 2025
DexVLA: Vision-Language Model with Plug-In Diffusion Expert for General Robot Control
Junjie Wen, Y. X. Zhu, Jinming Li, Zhibin Tang, Chaomin Shen, Feifei Feng
VLM
09 Feb 2025