Narrate2Nav: Real-Time Visual Navigation with Implicit Language Reasoning in Human-Centric Environments
- LM&Ro

Large Vision-Language Models (VLMs) have demonstrated potential in enhancing mobile robot navigation in human-centric environments by understanding contextual cues, human intentions, and social dynamics while exhibiting reasoning capabilities. However, their computational complexity and limited sensitivity to continuous numerical data impede real-time performance and precise motion control. To this end, we propose Narrate2Nav, a real-time vision-action model that leverages a novel self-supervised learning framework based on the Barlow Twins redundancy reduction loss to embed implicit natural language reasoning, social cues, and human intentions within a visual encoder, enabling reasoning in the model's latent space rather than token space. The model combines RGB inputs, motion commands, and textual signals of scene context during training to map robot observations to low-level motion commands for short-horizon point-goal navigation during deployment. Extensive evaluation of Narrate2Nav across various challenging scenarios, on an unseen offline dataset and in real-world experiments, demonstrates an overall improvement of 52.94 percent and 41.67 percent, respectively, over the next best baseline. Additionally, a qualitative comparison of Narrate2Nav's visual encoder attention maps against those of four other baselines demonstrates enhanced attention to navigation-critical scene elements, underscoring its effectiveness in human-centric navigation tasks.
@article{payandeh2025_2506.14233,
  title={Narrate2Nav: Real-Time Visual Navigation with Implicit Language Reasoning in Human-Centric Environments},
  author={Amirreza Payandeh and Anuj Pokhrel and Daeun Song and Marcos Zampieri and Xuesu Xiao},
  journal={arXiv preprint arXiv:2506.14233},
  year={2025}
}
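
For reference, the Barlow Twins redundancy reduction objective mentioned in the abstract computes a cross-correlation matrix between two batches of embeddings and drives it toward the identity. The minimal PyTorch sketch below assumes the visual-encoder features and the text-narration features play the role of the two views; the names z_vis, z_txt and the weight lambd are illustrative and not taken from the paper's implementation.

```python
import torch

def barlow_twins_loss(z_vis: torch.Tensor,
                      z_txt: torch.Tensor,
                      lambd: float = 5e-3) -> torch.Tensor:
    """Redundancy-reduction loss between two (N, D) embedding batches.

    Illustrative sketch: z_vis / z_txt stand in for visual and textual
    scene-context embeddings; lambd is a generic off-diagonal weight.
    """
    n, d = z_vis.shape
    # Standardize each embedding dimension across the batch.
    z_vis = (z_vis - z_vis.mean(0)) / (z_vis.std(0) + 1e-6)
    z_txt = (z_txt - z_txt.mean(0)) / (z_txt.std(0) + 1e-6)
    # Cross-correlation matrix between the two views, shape (D, D).
    c = (z_vis.T @ z_txt) / n
    eye = torch.eye(d, dtype=torch.bool, device=c.device)
    # Pull diagonal entries toward 1 (align the two views) and push
    # off-diagonal entries toward 0 (decorrelate embedding dimensions).
    on_diag = (c[eye] - 1).pow(2).sum()
    off_diag = c[~eye].pow(2).sum()
    return on_diag + lambd * off_diag


# Example with random embeddings standing in for visual and text features.
z_vis = torch.randn(64, 256)
z_txt = torch.randn(64, 256)
print(barlow_twins_loss(z_vis, z_txt))
```

Because the loss only constrains the cross-correlation statistics, the text branch can be dropped at deployment time, which is consistent with the abstract's claim that reasoning happens in the visual encoder's latent space rather than in token space.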