Sign Language: Towards Sign Understanding for Robot Autonomy

3 June 2025
Ayush Agrawal
Joel Loo
Nicky Zimmerman
David Hsu
SLR
arXiv (abs) · PDF · HTML
Main: 8 pages · 8 figures · 5 tables · Bibliography: 2 pages
Abstract

Signage is a ubiquitous element of human environments, playing a critical role in both scene understanding and navigation. For autonomous systems to fully interpret human environments, effectively parsing and understanding signs is essential. We introduce the task of navigational sign understanding, aimed at extracting navigational cues from signs that convey symbolic spatial information about the scene. Specifically, we focus on signs capturing directional cues that point toward distant locations and locational cues that identify specific places. To benchmark performance on this task, we curate a comprehensive test set, propose appropriate evaluation metrics, and establish a baseline approach. Our test set consists of over 160 images, capturing signs with varying complexity and design across a wide range of public spaces, such as hospitals, shopping malls, and transportation hubs. Our baseline approach harnesses Vision-Language Models (VLMs) to parse navigational signs under these high degrees of variability. Experiments show that VLMs offer promising performance on this task, potentially motivating downstream applications in robotics. The code and dataset are available on GitHub.
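As a rough illustration of how a VLM-based baseline of this kind could be wired up (this is a minimal sketch, not the authors' actual pipeline), the Python fragment below prompts a vision-language model to return directional cues (destination plus arrow direction) and locational cues (place names) as JSON, then structures the answer. All names here (SignCues, parse_sign, the JSON prompt format, the vlm_fn callable) are hypothetical, and the sketch assumes a chat-style VLM that can follow a JSON output instruction.

import json
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SignCues:
    directional: dict[str, str] = field(default_factory=dict)  # destination -> arrow direction
    locational: list[str] = field(default_factory=list)        # places the sign marks at this spot

PROMPT = (
    "You are reading a navigational sign in a public space. "
    "Return JSON with two keys: 'directional', a list of objects with "
    "'destination' and 'direction' (left/right/straight/back/up/down), and "
    "'locational', a list of place names the sign identifies at this spot."
)

def parse_sign(image_path: str, vlm_fn: Callable[[str, str], str]) -> SignCues:
    """Query a VLM on one sign image and structure its answer.

    vlm_fn is any callable that takes (image_path, prompt) and returns the
    model's raw text response; plug in whichever VLM backend you use.
    """
    raw = vlm_fn(image_path, PROMPT)
    data = json.loads(raw)  # assumes the model followed the JSON instruction
    return SignCues(
        directional={d["destination"]: d["direction"] for d in data.get("directional", [])},
        locational=list(data.get("locational", [])),
    )

# Example with a stubbed VLM response (a real run would call an actual model):
if __name__ == "__main__":
    fake_vlm = lambda img, prompt: (
        '{"directional": [{"destination": "Radiology", "direction": "left"}],'
        ' "locational": ["Main Lobby"]}'
    )
    print(parse_sign("hospital_lobby_sign.jpg", fake_vlm))

In practice the returned cues would feed a downstream navigation or mapping module; the paper's benchmark and evaluation metrics are defined in the linked code and dataset release.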

@article{agrawal2025_2506.02556,
  title={Sign Language: Towards Sign Understanding for Robot Autonomy},
  author={Ayush Agrawal and Joel Loo and Nicky Zimmerman and David Hsu},
  journal={arXiv preprint arXiv:2506.02556},
  year={2025}
}