GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality
arXiv:2404.08213
12 April 2024
Jaewook Lee, Jun Wang, Elizabeth Brown, Liam Chu, Sebastian S. Rodriguez, Jon E. Froehlich
Papers citing "GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality" (8 papers)
Grounding Task Assistance with Multimodal Cues from a Single Demonstration
Gabriel Sarch, Balasaravanan Thoravi Kumaravel, Sahithya Ravi, Vibhav Vineet, A. D. Wilson
02 May 2025
SemanticScanpath: Combining Gaze and Speech for Situated Human-Robot Interaction Using LLMs
Elisabeth Menendez, Michael Gienger, Santiago Martínez, Carlos Balaguer, Anna Belardinelli
19 Mar 2025
LION-FS: Fast & Slow Video-Language Thinker as Online Video Assistant
Wei Li, Bing Hu, Rui Shao, Leyang Shen, Liqiang Nie
05 Mar 2025
OmniQuery: Contextually Augmenting Captured Multimodal Memory to Enable Personal Question Answering
Jiahao Nick Li, Zhuohao Jerry Zhang, Zhang
24 Feb 2025
Cross-Format Retrieval-Augmented Generation in XR with LLMs for Context-Aware Maintenance Assistance
Á. Nagy, Yannis Spyridis, Vasileios Argyriou
24 Feb 2025
Everyday AR through AI-in-the-Loop
R. Suzuki, Mar González-Franco, Misha Sra, David Lindlbauer
17 Dec 2024
Analyzing Multimodal Interaction Strategies for LLM-Assisted Manipulation of 3D Scenes
Junlong Chen, Jens Grubert, Per Ola Kristensson
29 Oct 2024
"Ghost of the past": Identifying and Resolving Privacy Leakage from LLM's Memory through Proactive User Interaction
Shuning Zhang, Lyumanshan Ye, Xin Yi, Jingyu Tang, Bo Shui, Haobin Xing, Pengfei Liu, Hewu Li
19 Oct 2024