Large Language Models (LLMs) have demonstrated unprecedented capabilities across various natural language processing tasks. Their ability to process and generate viable text and code has made them ubiquitous in many fields, while their deployment as knowledge bases and "reasoning" tools remains an area of ongoing research. In geography, a growing body of literature has focused on evaluating LLMs' geographical knowledge and their ability to perform spatial reasoning. However, very little is known about the internal functioning of these models, especially about how they process geographical information.