Audio Visual Language Maps for Robot Navigation
arXiv:2303.07522 · 13 March 2023
Chen Huang, Oier Mees, Andy Zeng, Wolfram Burgard
[VGen]

Papers citing "Audio Visual Language Maps for Robot Navigation"

13 papers shown
Flex: End-to-End Text-Instructed Visual Navigation from Foundation Model Features
Makram Chahine, Alex Quach, Alaa Maalouf, Tsun-Hsuan Wang, Daniela Rus
16 Oct 2024
Robotic Control via Embodied Chain-of-Thought Reasoning
Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, Sergey Levine
[LRM, LM&Ro]
11 Jul 2024
Verifiably Following Complex Robot Instructions with Foundation Models
Benedict Quartey, Eric Rosen, Stefanie Tellex, George Konidaris
[LM&Ro]
18 Feb 2024
ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning
Yuanyi Zhong, Alihusein Kuwajerwala, Sacha Morin, Krishna Murthy Jatavallabhula, Bipasha Sen, ..., Celso Miguel de Melo, Joshua B. Tenenbaum, Antonio Torralba, Florian Shkurti, Liam Paull
[LM&Ro]
28 Sep 2023
VL-Fields: Towards Language-Grounded Neural Implicit Spatial Representations
Nikolaos Tsagkas, Oisin Mac Aodha, Chris Xiaoxuan Lu
[VLM]
21 May 2023
Visual Language Maps for Robot Navigation
Chen Huang, Oier Mees, Andy Zeng, Wolfram Burgard
[LM&Ro]
11 Oct 2022
CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory
Nur Muhammad (Mahi) Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, Arthur Szlam
[VLM, LM&Ro, CLIP]
11 Oct 2022
Grounding Language with Visual Affordances over Unstructured Data
Oier Mees, Jessica Borja-Diaz, Wolfram Burgard
[LM&Ro]
04 Oct 2022
Open-vocabulary Queryable Scene Representations for Real World Planning
Boyuan Chen, F. Xia, Brian Ichter, Kanishka Rao, K. Gopalakrishnan, Michael S. Ryoo, Austin Stone, Daniel Kappler
[LM&Ro]
20 Sep 2022
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Dhruv Shah, B. Osinski, Brian Ichter, Sergey Levine
[LM&Ro]
10 Jul 2022
From SLAM to Situational Awareness: Challenges and Survey
Hriday Bavle, Jose Luis Sanchez-Lopez, Claudio Cimarelli, E. Schmidt, H. Voos
01 Oct 2021
Open-vocabulary Object Detection via Vision and Language Knowledge Distillation
Xiuye Gu, Nayeon Lee, Weicheng Kuo, Huayu Chen
[VLM, ObjD]
28 Apr 2021
Speaker-Follower Models for Vision-and-Language Navigation
Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
[LM&Ro, LRM]
07 Jun 2018