Can LLMs Learn to Map the World from Local Descriptions?

27 May 2025
Sirui Xia
Aili Chen
Xintao Wang
Tinghui Zhu
Yikai Zhang
Jiangjie Chen
Yanghua Xiao
arXiv (abs) · PDF · HTML
Main: 8 pages · 11 figures · 25 tables · Bibliography: 2 pages · Appendix: 9 pages
Abstract

Recent advances in Large Language Models (LLMs) have demonstrated strong capabilities in tasks such as code and mathematics. However, their potential to internalize structured spatial knowledge remains underexplored. This study investigates whether LLMs, grounded in locally relative human observations, can construct coherent global spatial cognition by integrating fragmented relational descriptions. We focus on two core aspects of spatial cognition: spatial perception, where models infer consistent global layouts from local positional relationships, and spatial navigation, where models learn road connectivity from trajectory data and plan optimal paths between unconnected locations. Experiments conducted in a simulated urban environment demonstrate that LLMs not only generalize to unseen spatial relationships between points of interest (POIs) but also exhibit latent representations aligned with real-world spatial distributions. Furthermore, LLMs can learn road connectivity from trajectory descriptions, enabling accurate path planning and dynamic spatial awareness during navigation.
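
To make the two evaluated abilities concrete, the sketch below shows how the kind of data the abstract describes could be produced from a toy simulated city: fragmented, locally relative descriptions between POIs (spatial perception) and trajectory texts over a road graph, plus a reference shortest-path routine for scoring planned routes (spatial navigation). This is an illustrative reconstruction, not the authors' code; all names (POIS, ROADS, describe_relation, trajectory_description) are hypothetical.

```python
# Illustrative sketch (assumed setup, not the paper's implementation):
# generate local relational descriptions and trajectory texts from a toy city,
# and compute ground-truth optimal paths to check model-planned routes against.
from collections import deque

# Toy POI layout: name -> (x, y) coordinates in the simulated city.
POIS = {
    "museum": (0, 0),
    "library": (2, 0),
    "park": (2, 3),
    "station": (5, 3),
}

# Ground-truth road connectivity (undirected edges between POIs).
ROADS = [("museum", "library"), ("library", "park"), ("park", "station")]


def describe_relation(a, b):
    """Render one fragmented, locally relative description (perception data)."""
    (xa, ya), (xb, yb) = POIS[a], POIS[b]
    ew = "east" if xb > xa else "west" if xb < xa else ""
    ns = "north" if yb > ya else "south" if yb < ya else ""
    direction = f"{ns}{'-' if ns and ew else ''}{ew}" or "at the same location as"
    return f"{b} is {direction} of {a}."


def trajectory_description(path):
    """Render a trajectory over connected POIs as text (navigation data)."""
    return " -> ".join(path)


def shortest_path(start, goal):
    """BFS over the ground-truth road graph; reference optimal path."""
    adj = {p: set() for p in POIS}
    for u, v in ROADS:
        adj[u].add(v)
        adj[v].add(u)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None


if __name__ == "__main__":
    print(describe_relation("museum", "park"))                    # local relation
    print(trajectory_description(["museum", "library", "park"]))  # trajectory text
    print(shortest_path("museum", "station"))                     # optimal path
```

In this framing, an LLM trained on many such local descriptions would be probed for a globally consistent layout (e.g., relations between POI pairs never described together), and a model trained on trajectory texts would be asked to plan routes between POIs that never co-occur in training, compared against the BFS reference.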

@article{xia2025_2505.20874,
  title={Can LLMs Learn to Map the World from Local Descriptions?},
  author={Sirui Xia and Aili Chen and Xintao Wang and Tinghui Zhu and Yikai Zhang and Jiangjie Chen and Yanghua Xiao},
  journal={arXiv preprint arXiv:2505.20874},
  year={2025}
}