
REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments
arXiv 1904.10151 · 23 April 2019
Yuankai Qi, Qi Wu, Peter Anderson, Qing Guo, Luu Anh Tuan, Chunhua Shen, Anton Van Den Hengel
LM&Ro

Papers citing "REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments"

34 of 84 citing papers shown
Target-Driven Structured Transformer Planner for Vision-Language Navigation
Yusheng Zhao, Jinyu Chen, Chen Gao, Wenguan Wang, Lirong Yang, Haibing Ren, Huaxia Xia, Si Liu
LM&Ro · 19 Jul 2022

CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations
Jialu Li, Hao Tan, Joey Tianyi Zhou
LM&Ro · 05 Jul 2022

VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation
Kai Zheng, Xiaotong Chen, Odest Chadwicke Jenkins, Qing Guo
LM&Ro · CoGe · 17 Jun 2022

FedVLN: Privacy-preserving Federated Vision-and-Language Navigation
Kaiwen Zhou, Qing Guo
FedML · 28 Mar 2022

CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
S. Gadre, Mitchell Wortsman, Gabriel Ilharco, Ludwig Schmidt, Shuran Song
CLIP · LM&Ro · 20 Mar 2022

Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration
Xiwen Liang, Fengda Zhu, Lingling Li, Hang Xu, Xiaodan Liang
LM&Ro · VLM · 08 Mar 2022

Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation
Yicong Hong, Zun Wang, Qi Wu, Stephen Gould
3DV · 05 Mar 2022

CAISE: Conversational Agent for Image Search and Editing
Hyounghun Kim, Doo Soon Kim, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Joey Tianyi Zhou
24 Feb 2022

Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation
Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev
LM&Ro · 23 Feb 2022

Curriculum Learning for Vision-and-Language Navigation
Jiwen Zhang, Zhongyu Wei, Jianqing Fan, J. Peng
LM&Ro · 14 Nov 2021

SOAT: A Scene- and Object-Aware Transformer for Vision-and-Language Navigation
A. Moudgil, Arjun Majumdar, Harsh Agrawal, Stefan Lee, Dhruv Batra
LM&Ro · 27 Oct 2021

A Framework for Learning to Request Rich and Contextually Useful Information from Humans
Khanh Nguyen, Yonatan Bisk, Hal Daumé
14 Oct 2021

Audio-Visual Grounding Referring Expression for Robotic Manipulation
Yefei Wang, Kaili Wang, Yi Wang, Di Guo, Huaping Liu, F. Sun
22 Sep 2021

Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation
Bingqian Lin, Yi Zhu, Yanxin Long, Xiaodan Liang, QiXiang Ye, Liang Lin
AAML · 23 Jul 2021

How Much Can CLIP Benefit Vision-and-Language Tasks?
Sheng Shen, Liunian Harold Li, Hao Tan, Joey Tianyi Zhou, Anna Rohrbach, Kai-Wei Chang, Z. Yao, Kurt Keutzer
CLIP · VLM · MLLM · 13 Jul 2021

LanguageRefer: Spatial-Language Model for 3D Visual Grounding
Junha Roh, Karthik Desingh, Ali Farhadi, Dieter Fox
07 Jul 2021

Core Challenges in Embodied Vision-Language Planning
Jonathan M Francis, Nariaki Kitamura, Felix Labelle, Xiaopeng Lu, Ingrid Navarro, Jean Oh
LM&Ro · 26 Jun 2021

The Road to Know-Where: An Object-and-Room Informed Sequential BERT for Indoor Vision-Language Navigation
Yuankai Qi, Zizheng Pan, Yicong Hong, Ming-Hsuan Yang, Anton Van Den Hengel, Qi Wu
LM&Ro · 09 Apr 2021

SOON: Scenario Oriented Object Navigation with Graph-based Exploration
Fengda Zhu, Xiwen Liang, Yi Zhu, Xiaojun Chang, Xiaodan Liang
31 Mar 2021

Diagnosing Vision-and-Language Navigation: What Really Matters
Wanrong Zhu, Yuankai Qi, P. Narayana, Kazoo Sone, Sugato Basu, Qing Guo, Qi Wu, Miguel P. Eckstein, Luu Anh Tuan
LM&Ro · 30 Mar 2021

Scene-Intuitive Agent for Remote Embodied Visual Grounding
Xiangru Lin, Guanbin Li, Yizhou Yu
LM&Ro · 24 Mar 2021

Refer-it-in-RGBD: A Bottom-up Approach for 3D Visual Grounding in RGBD Images
Haolin Liu, Anran Lin, Xiaoguang Han, Lei Yang, Yizhou Yu, Shuguang Cui
14 Mar 2021

Are We There Yet? Learning to Localize in Embodied Instruction Following
Shane Storks, Qiaozi Gao, Govind Thattai, Gokhan Tur
LM&Ro · 09 Jan 2021

Semantics for Robotic Mapping, Perception and Interaction: A Survey
Sourav Garg, Niko Sünderhauf, Feras Dayoub, D. Morrison, Akansel Cosgun, ..., Tat-Jun Chin, Ian Reid, Stephen Gould, Peter Corke, Michael Milford
02 Jan 2021

Where Are You? Localization from Embodied Dialog
Meera Hahn, Jacob Krantz, Dhruv Batra, Devi Parikh, James M. Rehg, Stefan Lee, Peter Anderson
LM&Ro · 16 Nov 2020

Sim-to-Real Transfer for Vision-and-Language Navigation
Peter Anderson, Ayush Shrivastava, Joanne Truong, Arjun Majumdar, Devi Parikh, Dhruv Batra, Stefan Lee
LM&Ro · 07 Nov 2020

Language and Visual Entity Relationship Graph for Agent Navigation
Yicong Hong, Cristian Rodriguez-Opazo, Yuankai Qi, Qi Wu, Stephen Gould
LM&Ro · 19 Oct 2020

Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding
Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, Jason Baldridge
15 Oct 2020

Referring Expression Comprehension: A Survey of Methods and Datasets
Yanyuan Qiao, Chaorui Deng, Qi Wu
ObjD · 19 Jul 2020

ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language
Dave Zhenyu Chen, Angel X. Chang, Matthias Nießner
3DPC · 18 Dec 2019

Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling
Tsu-jui Fu, Qing Guo, Matthew F. Peterson, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang
17 Nov 2019

Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning
Khanh Nguyen, Hal Daumé
LM&Ro · EgoV · 04 Sep 2019

Speaker-Follower Models for Vision-and-Language Navigation
Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
LM&Ro · LRM · 07 Jun 2018

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015