ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.


Geospatial Mechanistic Interpretability of Large Language Models

6 May 2025
Stef De Sabbata
Stefano Mizzaro
Kevin Roitero
Main text: 3 pages, 3 figures; appendix: 13 pages
Abstract

Large Language Models (LLMs) have demonstrated unprecedented capabilities across various natural language processing tasks. Their ability to process and generate viable text and code has made them ubiquitous in many fields, while their deployment as knowledge bases and "reasoning" tools remains an area of ongoing research. In geography, a growing body of literature has focused on evaluating LLMs' geographical knowledge and their ability to perform spatial reasoning. However, very little is known about the internal functioning of these models, in particular about how they process geographical information.
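A common starting point for this kind of mechanistic-interpretability work is a linear probe: fit a linear map from a model's hidden-state vectors to a target quantity (here, latitude/longitude) and check how well the representation predicts it. The sketch below is a minimal, self-contained illustration of that technique using synthetic "activations" with a planted linear structure; the dimensions, noise level, and data are placeholders, not the paper's actual setup.

```python
import numpy as np

# Minimal linear-probe sketch: given hidden-state vectors for place
# names, fit a linear map to coordinates and measure how well the
# representation encodes location. All data here is synthetic.
rng = np.random.default_rng(0)

d = 64                      # hidden-state dimensionality (assumed)
n_train, n_test = 200, 50   # number of "place name" examples

# Plant a linear lat/lon structure in the synthetic activations
W_true = rng.normal(size=(d, 2))
H_train = rng.normal(size=(n_train, d))                  # "activations"
coords_train = H_train @ W_true + 0.01 * rng.normal(size=(n_train, 2))
H_test = rng.normal(size=(n_test, d))
coords_test = H_test @ W_true

# Fit the probe with ordinary least squares
W_probe, *_ = np.linalg.lstsq(H_train, coords_train, rcond=None)

# Evaluate: mean absolute error of predicted coordinates on held-out data
pred = H_test @ W_probe
mae = np.abs(pred - coords_test).mean()
print(f"probe MAE: {mae:.4f}")
```

A low held-out error suggests the target is linearly decodable from the activations; with real models, the probe would be fit per layer on activations collected from place-name prompts.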

@article{sabbata2025_2505.03368,
  title={Geospatial Mechanistic Interpretability of Large Language Models},
  author={Stef De Sabbata and Stefano Mizzaro and Kevin Roitero},
  journal={arXiv preprint arXiv:2505.03368},
  year={2025}
}