ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 2505.02152 (Cited By)
Interleave-VLA: Enhancing Robot Manipulation with Interleaved Image-Text Instructions

4 May 2025
Cunxin Fan, Xiaosong Jia, Yihang Sun, Yixiao Wang, Jianglan Wei, Ziyang Gong, Xiangyu Zhao, M. Tomizuka, Xue Yang, Junchi Yan, Mingyu Ding
Topics: LM&Ro, VLM
Papers citing "Interleave-VLA: Enhancing Robot Manipulation with Interleaved Image-Text Instructions"

3 / 3 papers shown
  1. Unveiling the Potential of Vision-Language-Action Models with Open-Ended Multimodal Instructions
     Wei Zhao, Gongsheng Li, Zhefei Gong, Pengxiang Ding, H. Zhao, Donglin Wang
     Topics: LM&Ro · 19 · 0 · 0 · 16 May 2025

  2. Benchmarking Vision, Language, & Action Models in Procedurally Generated, Open Ended Action Environments
     Pranav Guruprasad, Yangyue Wang, Sudipta Chowdhury, Harshvardhan Sikka
     Topics: LM&Ro, VLM · 153 · 0 · 0 · 08 May 2025

  3. Vision-Language-Action Models: Concepts, Progress, Applications and Challenges
     Ranjan Sapkota, Yang Cao, Konstantinos I Roumeliotis, Manoj Karkee
     Topics: LM&Ro · 157 · 1 · 0 · 07 May 2025