History-Augmented Vision-Language Models for Frontier-Based Zero-Shot Object Navigation

19 June 2025
Mobin Habibpour
Fatemeh Afghah
Main: 4 pages · 3 figures · 2 tables · Bibliography: 1 page
Abstract

Object Goal Navigation (ObjectNav) challenges robots to find objects in unseen environments, demanding sophisticated reasoning. While Vision-Language Models (VLMs) show potential, current ObjectNav methods often employ them superficially, primarily using vision-language embeddings for object-scene similarity checks rather than leveraging deeper reasoning. This limits contextual understanding and leads to practical issues like repetitive navigation behaviors. This paper introduces a novel zero-shot ObjectNav framework that pioneers the use of dynamic, history-aware prompting to more deeply integrate VLM reasoning into frontier-based exploration. Our core innovation lies in providing the VLM with action history context, enabling it to generate semantic guidance scores for navigation actions while actively avoiding decision loops. We also introduce a VLM-assisted waypoint generation mechanism for refining the final approach to detected objects. Evaluated on the HM3D dataset within Habitat, our approach achieves a 46% Success Rate (SR) and 24.8% Success weighted by Path Length (SPL). These results are comparable to state-of-the-art zero-shot methods, demonstrating the significant potential of our history-augmented VLM prompting strategy for more robust and context-aware robotic navigation.
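The mechanism described in the abstract, prompting the VLM with recent action history so it can assign semantic guidance scores to candidate frontiers while avoiding decision loops, can be illustrated with a minimal sketch. This is not the authors' implementation: the names Frontier, build_history_prompt, query_vlm, and score_frontiers are hypothetical, and the VLM call is a stub standing in for whatever model and prompt format the paper actually uses.

# Minimal sketch of history-augmented VLM prompting for frontier scoring.
# All names here are hypothetical and not taken from the paper's code.
from dataclasses import dataclass
from typing import List

@dataclass
class Frontier:
    frontier_id: int
    description: str          # e.g. a caption of the view toward this frontier
    score: float = 0.0        # semantic guidance score assigned by the VLM

def build_history_prompt(goal: str, frontiers: List[Frontier],
                         action_history: List[str]) -> str:
    """Compose a prompt that includes recent actions so the VLM can
    penalize revisits and avoid decision loops (hypothetical format)."""
    history = "\n".join(f"- {a}" for a in action_history[-10:]) or "- none yet"
    options = "\n".join(f"{f.frontier_id}: {f.description}" for f in frontiers)
    return (
        f"Target object: {goal}\n"
        f"Recent actions (do not repeat loops):\n{history}\n"
        f"Candidate frontiers:\n{options}\n"
        "For each frontier, return a score in [0, 1] for how likely exploring "
        "it leads to the target, avoiding areas already visited."
    )

def query_vlm(prompt: str, num_options: int) -> List[float]:
    """Stub for the VLM call; a real system would send the prompt plus the
    current observation to a vision-language model and parse its scores."""
    return [0.5] * num_options  # placeholder scores

def score_frontiers(goal: str, frontiers: List[Frontier],
                    action_history: List[str]) -> Frontier:
    """Score all frontiers with the history-aware prompt and pick the best,
    then record the chosen action so the next prompt reflects it."""
    prompt = build_history_prompt(goal, frontiers, action_history)
    for f, s in zip(frontiers, query_vlm(prompt, len(frontiers))):
        f.score = s
    best = max(frontiers, key=lambda f: f.score)
    action_history.append(f"explored frontier {best.frontier_id}")
    return best

The key design point the sketch tries to capture is that the action history is fed back into every prompt, so the scoring step itself can discourage the repetitive navigation behaviors the paper identifies in shallower VLM usage.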

@article{habibpour2025_2506.16623,
  title={History-Augmented Vision-Language Models for Frontier-Based Zero-Shot Object Navigation},
  author={Mobin Habibpour and Fatemeh Afghah},
  journal={arXiv preprint arXiv:2506.16623},
  year={2025}
}