ResearchTrend.AI
Episodic Transformer for Vision-and-Language Navigation
Alexander Pashevich, Cordelia Schmid, Chen Sun
arXiv: 2105.06453 · 13 May 2021 · LM&Ro

Papers citing "Episodic Transformer for Vision-and-Language Navigation"

39 / 139 papers shown
Instruction-driven history-aware policies for robotic manipulations
Pierre-Louis Guhur, Shizhe Chen, Ricardo Garcia Pinel, Makarand Tapaswi, Ivan Laptev, Cordelia Schmid
LM&Ro · 11 Sep 2022

On Grounded Planning for Embodied Tasks with Language Models
Bill Yuchen Lin, Chengsong Huang, Qian Liu, Wenda Gu, Sam Sommerer, Xiang Ren
LM&Ro · 29 Aug 2022

JARVIS: A Neuro-Symbolic Commonsense Reasoning Framework for Conversational Embodied Agents
Kai Zheng, KAI-QING Zhou, Jing Gu, Yue Fan, Jialu Wang, Zong-xiao Li, Xuehai He, Qing Guo
LM&Ro · 28 Aug 2022

Learning from Unlabeled 3D Environments for Vision-and-Language Navigation
Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev
24 Aug 2022

MemoNav: Selecting Informative Memories for Visual Navigation
Hongxin Li, Xueke Yang, Yu-Ren Yang, Shuqi Mei, Zhaoxiang Zhang
20 Aug 2022

Target-Driven Structured Transformer Planner for Vision-Language Navigation
Yusheng Zhao, Jinyu Chen, Chen Gao, Wenguan Wang, Lirong Yang, Haibing Ren, Huaxia Xia, Si Liu
LM&Ro · 19 Jul 2022

1st Place Solutions for RxR-Habitat Vision-and-Language Navigation Competition (CVPR 2022)
Dongyan An, Zun Wang, Yangguang Li, Yi Wang, Yicong Hong, Yan Huang, Liang Wang, Jing Shao
23 Jun 2022

A Simple Approach for Visual Rearrangement: 3D Mapping and Semantic Search
Brandon Trabucco, Gunnar A. Sigurdsson, Robinson Piramuthu, Gaurav Sukhatme, Ruslan Salakhutdinov
OCL · 21 Jun 2022

EAGER: Asking and Answering Questions for Automatic Reward Shaping in Language-guided RL
Thomas Carta, Pierre-Yves Oudeyer, Olivier Sigaud, Sylvain Lamprier
OffRL · 20 Jun 2022

VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation
Kai Zheng, Xiaotong Chen, Odest Chadwicke Jenkins, Qing Guo
LM&Ro, CoGe · 17 Jun 2022

Multimodal Learning with Transformers: A Survey
P. Xu, Xiatian Zhu, David A. Clifton
ViT · 13 Jun 2022

Aerial Vision-and-Dialog Navigation
Yue Fan, Winson X. Chen, Tongzhou Jiang, Chun-ni Zhou, Yi Zhang, Qing Guo
24 May 2022

On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets
Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Joey Tianyi Zhou, Dilek Z. Hakkani-Tür
18 May 2022

P3IV: Probabilistic Procedure Planning from Instructional Videos with Weak Supervision
Henghui Zhao, Isma Hadji, Nikita Dvornik, Konstantinos G. Derpanis, Richard P. Wildes, Allan D. Jepson
04 May 2022

On the Importance of Karaka Framework in Multi-modal Grounding
Sai Kiran Gorthi, R. Mamidi
09 Apr 2022

Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, ..., Ted Xiao, Peng-Tao Xu, Sichun Xu, Mengyuan Yan, Andy Zeng
LM&Ro · 04 Apr 2022

Moment-based Adversarial Training for Embodied Language Comprehension
Shintaro Ishikawa, K. Sugiura
LM&Ro · 02 Apr 2022

Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions
Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, Qing Guo
LM&Ro · 22 Mar 2022

Summarizing a virtual robot's past actions in natural language
Chad DeChant, Daniel Bauer
LM&Ro · 13 Mar 2022

Cross-modal Map Learning for Vision and Language Navigation
G. Georgakis, Karl Schmeckpeper, Karan Wanchoo, Soham Dan, E. Miltsakaki, Dan Roth, Kostas Daniilidis
10 Mar 2022

LEBP -- Language Expectation & Binding Policy: A Two-Stream Framework for Embodied Vision-and-Language Interaction Task Learning Agents
Hao Liu, Yang Liu, Hong He, Hang Yang
LM&Ro · 09 Mar 2022

DialFRED: Dialogue-Enabled Agents for Embodied Instruction Following
Xiaofeng Gao, Qiaozi Gao, Ran Gong, Kaixiang Lin, Govind Thattai, Gaurav Sukhatme
LM&Ro · 27 Feb 2022

Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation
Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev
LM&Ro · 23 Feb 2022

One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones
Chan Hee Song, Jihyung Kil, Tai-Yu Pan, Brian M. Sadler, Wei-Lun Chao, Yu-Chuan Su
LRM · 14 Feb 2022

ASC me to Do Anything: Multi-task Training for Embodied AI
Jiasen Lu, Jordi Salvador, Roozbeh Mottaghi, Aniruddha Kembhavi
14 Feb 2022

Learning to Act with Affordance-Aware Multimodal Neural SLAM
Zhiwei Jia, Kaixiang Lin, Yizhou Zhao, Qiaozi Gao, Govind Thattai, Gaurav Sukhatme
LM&Ro · 24 Jan 2022

Video Transformers: A Survey
Javier Selva, A. S. Johansen, Sergio Escalera, Kamal Nasrollahi, T. Moeslund, Albert Clapés
ViT · 16 Jan 2022

Less is More: Generating Grounded Navigation Instructions from Landmarks
Su Wang, Ceslee Montgomery, Jordi Orbay, Vighnesh Birodkar, Aleksandra Faust, Izzeddin Gur, Natasha Jaques, Austin Waters, Jason Baldridge, Peter Anderson
25 Nov 2021

Multimodal Transformer with Variable-length Memory for Vision-and-Language Navigation
Chuang Lin, Yi-Xin Jiang, Jianfei Cai, Lizhen Qu, Gholamreza Haffari, Zehuan Yuan
10 Nov 2021

LUMINOUS: Indoor Scene Generation for Embodied AI Challenges
Yizhou Zhao, Kaixiang Lin, Zhiwei Jia, Qiaozi Gao, Govind Thattai, Jesse Thomason, Gaurav Sukhatme
3DV, LM&Ro · 10 Nov 2021

History Aware Multimodal Transformer for Vision-and-Language Navigation
Shizhe Chen, Pierre-Louis Guhur, Cordelia Schmid, Ivan Laptev
LM&Ro · 25 Oct 2021

FILM: Following Instructions in Language with Modular Methods
So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov
LM&Ro · 12 Oct 2021

Skill Induction and Planning with Latent Language
Pratyusha Sharma, Antonio Torralba, Jacob Andreas
LM&Ro · 04 Oct 2021

TEACh: Task-driven Embodied Agents that Chat
Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, P. Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramithu, Gokhan Tur, Dilek Z. Hakkani-Tür
LM&Ro · 01 Oct 2021

Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion
Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, Gaurav Sukhatme
LM&Ro · 10 Aug 2021

A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution
Valts Blukis, Chris Paxton, D. Fox, Animesh Garg, Yoav Artzi
LM&Ro · 12 Jul 2021

Attention Bottlenecks for Multimodal Fusion
Arsha Nagrani, Shan Yang, Anurag Arnab, A. Jansen, Cordelia Schmid, Chen Sun
30 Jun 2021

Multi-modal Transformer for Video Retrieval
Valentin Gabeur, Chen Sun, Alahari Karteek, Cordelia Schmid
ViT · 21 Jul 2020

Speaker-Follower Models for Vision-and-Language Navigation
Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
LM&Ro, LRM · 07 Jun 2018