ResearchTrend.AI

Distilling Internet-Scale Vision-Language Models into Embodied Agents (arXiv:2301.12507)

29 January 2023
T. Sumers, Kenneth Marino, Arun Ahuja, Rob Fergus, Ishita Dasgupta
Tags: LM&Ro

Papers citing "Distilling Internet-Scale Vision-Language Models into Embodied Agents"

28 / 28 papers shown
GTR: Guided Thought Reinforcement Prevents Thought Collapse in RL-based VLM Agent Training
Tong Wei, Yijun Yang, Junliang Xing, Yuanchun Shi, Zongqing Lu, Deheng Ye
Tags: OffRL, LRM
11 Mar 2025

Generative Artificial Intelligence in Robotic Manipulation: A Survey
Anton van den Hengel, Peng Yun, Jun Cen, Junhao Cai, DiDi Zhu, ..., Qifeng Chen, Jia Pan, Wei Zhang, Bo Yang, Hua Chen
05 Mar 2025

Embodied CoT Distillation From LLM To Off-the-shelf Agents
Wonje Choi, Woo Kyung Kim, Minjong Yoo, Honguk Woo
Tags: OffRL, LM&Ro
16 Dec 2024

VLM-Vac: Enhancing Smart Vacuums through VLM Knowledge Distillation and Language-Guided Experience Replay
Reihaneh Mirjalili, Michael Krawez, Florian Walter, Wolfram Burgard
21 Sep 2024

FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning
Yuwei Fu, Haichao Zhang, Di Wu, Wei-ping Xu, Benoit Boulet
Tags: VLM
02 Jun 2024

Explore until Confident: Efficient Exploration for Embodied Question Answering
Allen Z. Ren, Jaden Clark, Anushri Dixit, Masha Itkina, Anirudha Majumdar, Dorsa Sadigh
23 Mar 2024

BAGEL: Bootstrapping Agents by Guiding Exploration with Language
Shikhar Murty, Christopher D. Manning, Peter Shaw, Mandar Joshi, Kenton Lee
Tags: LM&Ro, LLMAG
12 Mar 2024

A Survey on Knowledge Distillation of Large Language Models
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, Dinesh Manocha
Tags: KELM, VLM
20 Feb 2024

Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning
Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, Hongxia Yang
Tags: LRM
10 Jan 2024

Object-Centric Instruction Augmentation for Robotic Manipulation
Junjie Wen, Yichen Zhu, Minjie Zhu, Jinming Li, Zhiyuan Xu, ..., Chaomin Shen, Yaxin Peng, Dong Liu, Feifei Feng, Jian Tang
Tags: LM&Ro
05 Jan 2024

MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices
Xiangxiang Chu, Limeng Qiao, Xinyang Lin, Shuang Xu, Yang Yang, ..., Fei Wei, Xinyu Zhang, Bo-Wen Zhang, Xiaolin Wei, Chunhua Shen
Tags: MLLM
28 Dec 2023

Vision-Language Models as a Source of Rewards
Kate Baumli, Satinder Baveja, Feryal M. P. Behbahani, Harris Chan, Gheorghe Comanici, ..., Yannick Schroecker, Stephen Spencer, Richie Steigerwald, Luyu Wang, Lei Zhang
Tags: VLM, LRM
14 Dec 2023

Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld
Yijun Yang, Tianyi Zhou, Kanxue Li, Dapeng Tao, Lusong Li, Li Shen, Xiaodong He, Jing Jiang, Yuhui Shi
Tags: LLMAG, LM&Ro
28 Nov 2023

EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models
Sijie Cheng, Zhicheng Guo, Jingwen Wu, Kechen Fang, Peng Li, Huaping Liu, Yang Liu
Tags: EgoV, LRM
27 Nov 2023

Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning
Juan Rocamonde, Victoriano Montesinos, Elvis Nava, Ethan Perez, David Lindner
Tags: VLM
19 Oct 2023

Cognitive Architectures for Language Agents
T. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
Tags: LLMAG, LM&Ro
05 Sep 2023

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, ..., Ted Xiao, Peng-Tao Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich
Tags: LM&Ro, LRM
28 Jul 2023

Towards A Unified Agent with Foundation Models
Norman Di Palo, Arunkumar Byravan, Leonard Hasenclever, Markus Wulfmeier, N. Heess, Martin Riedmiller
Tags: LM&Ro, LLMAG, OffRL
18 Jul 2023

STEVE-1: A Generative Model for Text-to-Behavior in Minecraft
Shalev Lifshitz, Keiran Paster, Harris Chan, Jimmy Ba, Sheila A. McIlraith
Tags: LM&Ro
01 Jun 2023

Improving Policy Learning via Language Dynamics Distillation
Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktaschel
Tags: OffRL
30 Sep 2022

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, D. Fox, Jesse Thomason, Animesh Garg
Tags: LM&Ro, LLMAG
22 Sep 2022

Open-vocabulary Queryable Scene Representations for Real World Planning
Boyuan Chen, F. Xia, Brian Ichter, Kanishka Rao, K. Gopalakrishnan, Michael S. Ryoo, Austin Stone, Daniel Kappler
Tags: LM&Ro
20 Sep 2022

LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Dhruv Shah, B. Osinski, Brian Ichter, Sergey Levine
Tags: LM&Ro
10 Jul 2022

ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings
Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, Dhruv Batra
Tags: LM&Ro
24 Jun 2022

Skill Induction and Planning with Latent Language
Pratyusha Sharma, Antonio Torralba, Jacob Andreas
Tags: LM&Ro
04 Oct 2021

iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks
Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, S. Srivastava, ..., Karen Liu, H. Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese
Tags: LM&Ro
06 Aug 2021

Interactive Learning from Activity Description
Khanh Nguyen, Dipendra Kumar Misra, Robert Schapire, Miroslav Dudík, Patrick Shafto
13 Feb 2021

Speaker-Follower Models for Vision-and-Language Navigation
Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
Tags: LM&Ro, LRM
07 Jun 2018