SemanticScanpath: Combining Gaze and Speech for Situated Human-Robot Interaction Using LLMs

19 March 2025
Elisabeth Menendez
Michael Gienger
Santiago Martínez
Carlos Balaguer
Anna Belardinelli
Abstract

Large Language Models (LLMs) have substantially improved the conversational capabilities of social robots. Nevertheless, for intuitive and fluent human-robot interaction, robots should be able to ground the conversation by relating ambiguous or underspecified spoken utterances to the current physical situation and to the intents the user expresses non-verbally, for example through referential gaze. Here we propose a representation that integrates speech and gaze to give LLMs higher situated awareness and let them correctly resolve ambiguous requests. Our approach relies on a text-based semantic translation of the user's scanpath, provided alongside the verbal request, and demonstrates the LLM's ability to reason about gaze behavior, robustly ignoring spurious glances and irrelevant objects. We validate the system across multiple tasks and two scenarios, showing its generality and accuracy, and demonstrate its implementation on a robotic platform, closing the loop from request interpretation to execution.
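The page itself contains no code, so the following is only a minimal illustrative sketch of the general idea described in the abstract: translating a gaze scanpath into text and combining it with the spoken request into a single LLM prompt. The object labels, duration values, prompt wording, and class/function names below are hypothetical and do not come from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class Fixation:
    """One gaze fixation: the scene object looked at and how long, in seconds."""
    object_label: str
    duration_s: float


def scanpath_to_text(fixations: List[Fixation]) -> str:
    """Translate a scanpath into a short textual description.

    Brief glances are kept in the description so the LLM itself can decide
    whether they are spurious, mirroring the abstract's claim that the model
    reasons over the full gaze behavior.
    """
    parts = [f"{fix.object_label} ({fix.duration_s:.1f}s)" for fix in fixations]
    return "The user looked at, in order: " + ", ".join(parts) + "."


def build_prompt(fixations: List[Fixation], utterance: str) -> str:
    """Combine the semantic scanpath and the spoken request into one LLM prompt."""
    return (
        "You are a robot assistant. Use the user's gaze to resolve ambiguous "
        "references, ignoring brief or irrelevant glances.\n"
        f"Gaze: {scanpath_to_text(fixations)}\n"
        f"Request: \"{utterance}\"\n"
        "Which object does the user mean, and what should the robot do?"
    )


if __name__ == "__main__":
    gaze = [
        Fixation("red cup", 1.4),
        Fixation("window", 0.1),      # brief, likely spurious glance
        Fixation("blue bottle", 0.9),
    ]
    print(build_prompt(gaze, "Can you hand me that one?"))
    # The resulting prompt would then be sent to an LLM of choice; the paper
    # additionally closes the loop by executing the interpreted request on a robot.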

@article{menendez2025_2503.16548,
  title={SemanticScanpath: Combining Gaze and Speech for Situated Human-Robot Interaction Using LLMs},
  author={Elisabeth Menendez and Michael Gienger and Santiago Martínez and Carlos Balaguer and Anna Belardinelli},
  journal={arXiv preprint arXiv:2503.16548},
  year={2025}
}