Vision-Language Foundation Models as Effective Robot Imitators
2 November 2023
Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chi-Hou Cheang, Ya Jing, Weinan Zhang, Huaping Liu, Hang Li, Tao Kong
LM&Ro

Papers citing "Vision-Language Foundation Models as Effective Robot Imitators"

11 / 111 papers shown
NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation
24 Feb 2024
Jiazhao Zhang, Kunyu Wang, Rongtao Xu, Gengze Zhou, Yicong Hong, Xiaomeng Fang, Qi Wu, Zhizheng Zhang, Wang He
LM&Ro

SInViG: A Self-Evolving Interactive Visual Agent for Human-Robot Interaction
19 Feb 2024
Jie Xu, Hanbo Zhang, Xinghang Li, Huaping Liu, Xuguang Lan, Tao Kong
LM&Ro

3D Diffuser Actor: Policy Diffusion with 3D Scene Representations
16 Feb 2024
Tsung-Wei Ke, N. Gkanatsios, Katerina Fragkiadaki
VGen

An Interactive Agent Foundation Model
08 Feb 2024
Zane Durante, Bidipta Sarkar, Ran Gong, Rohan Taori, Yusuke Noda, ..., Katsushi Ikeuchi, Fei-Fei Li, Jianfeng Gao, Naoki Wake, Qiuyuan Huang
LM&Ro, AI4CE, LLMAG

CLIP feature-based randomized control using images and text for multiple tasks and robots
18 Jan 2024
Kazuki Shibata, Hideki Deguchi, Shun Taguchi

QUAR-VLA: Vision-Language-Action Model for Quadruped Robots
22 Dec 2023
Pengxiang Ding, Han Zhao, Wenxuan Song, Zhitao Wang, Zhenyu Wei, Shangke Lyu, Ningxi Yang, Donglin Wang

Human Demonstrations are Generalizable Knowledge for Robots
05 Dec 2023
Te Cui, Guangyan Chen, Tianxing Zhou, Zicai Peng, Mengxiao Hu, Haoyang Lu, Haizhou Li, Meiling Wang, Yi Yang, Yufeng Yue
LM&Ro

GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration
20 Nov 2023
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
LM&Ro

Language-Conditioned Imitation Learning with Base Skill Priors under Unstructured Data
30 May 2023
Hongkuan Zhou, Zhenshan Bing, Xiangtong Yao, Xiaojie Su, Chenguang Yang, Kai-Qi Huang, Alois C. Knoll
LM&Ro

Instruction Tuning with GPT-4
06 Apr 2023
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao
SyDa, ALM, LM&MA

Grounding Language with Visual Affordances over Unstructured Data
04 Oct 2022
Oier Mees, Jessica Borja-Diaz, Wolfram Burgard
LM&Ro