arXiv:2503.16492
FAM-HRI: Foundation-Model Assisted Multi-Modal Human-Robot Interaction Combining Gaze and Speech
11 March 2025
Yuzhi Lai, Shenghai Yuan, Boya Zhang, Benjamin Kiefer, Peizheng Li, Andreas Zell
Papers citing "FAM-HRI: Foundation-Model Assisted Multi-Modal Human-Robot Interaction Combining Gaze and Speech" (6 papers shown)
NVP-HRI: Zero Shot Natural Voice and Posture-based Human-Robot Interaction via Large Language Model
Yuzhi Lai, Shenghai Yuan, Youssef Nassar, Mingyu Fan, T. Weber, Matthias Rätsch
LM&Ro · 99 · 3 · 0 · 12 Mar 2025

LaMI: Large Language Models for Multi-Modal Human-Robot Interaction
Chao Wang, Stephan Hasler, Daniel Tanneberg, Felix Ocker, Frank Joublin, Antonello Ceravola, Joerg Deigmoeller, Michael Gienger
LM&Ro · 81 · 30 · 0 · 26 Jan 2024

Project Aria: A New Tool for Egocentric Multi-Modal AI Research
Jakob Engel, Kiran Somasundaram, Michael Goesele, Albert Sun, Alexander Gamino, ..., Zijian Wang, Mingfei Yan, Carl Ren, R. D. Nardi, Richard Newcombe
EgoV · 120 · 99 · 0 · 24 Aug 2023

Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, ..., Chun-yue Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang
ObjD · 189 · 2,015 · 0 · 09 Mar 2023

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg
LM&Ro, LLMAG · 173 · 655 · 0 · 22 Sep 2022

ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
C. Campos, Richard Elvira, J. Rodríguez, José M.M. Montiel, Juan D. Tardós
86 · 2,886 · 0 · 23 Jul 2020