LocationReasoner: Evaluating LLMs on Real-World Site Selection Reasoning

Main: 9 pages · 4 figures · 6 tables · Bibliography: 3 pages · Appendix: 2 pages
Abstract

Recent advances in large language models (LLMs), particularly those enhanced through reinforced post-training, have demonstrated impressive reasoning capabilities, as exemplified by models such as OpenAI o1 and DeepSeek-R1. However, these capabilities are predominantly benchmarked on domains like mathematical problem solving and code generation -- leaving open the question of whether such reasoning skills generalize to complex, real-world scenarios. In this paper, we introduce LocationReasoner, a benchmark designed to evaluate LLMs' reasoning abilities in the context of real-world site selection, where models must identify feasible locations by reasoning over diverse and complicated spatial, environmental, and logistical constraints. The benchmark comprises over 300 carefully crafted queries of varying difficulty levels, supported by a sandbox environment with in-house tools for constraint-based location search. Extensive evaluations reveal that state-of-the-art reasoning models offer limited improvement over their non-reasoning predecessors in real-world contexts, with even the latest OpenAI o4 model failing on 30% of site selection tasks. Moreover, agentic strategies such as ReAct and Reflexion often suffer from over-reasoning, leading to worse outcomes than direct code-generation prompting. With key limitations of LLMs in holistic and non-linear reasoning highlighted, we release LocationReasoner to foster the development of LLMs and agents capable of robust, grounded reasoning in real-world decision-making tasks. Code and data for our benchmark are available at this https URL.
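To make the task concrete, the following is a minimal Python sketch of the kind of multi-constraint feasibility check a site selection query entails. The actual LocationReasoner sandbox API is not described in the abstract; the Site fields, thresholds, and helper names below are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Site:
    site_id: str
    population_within_1km: int       # hypothetical demand attribute
    distance_to_highway_km: float    # hypothetical logistics attribute
    flood_risk: str                  # hypothetical environmental attribute, e.g. "low"/"medium"/"high"

def feasible_sites(candidates: list[Site]) -> list[Site]:
    """Filter candidate sites against spatial, environmental, and
    logistical constraints, mirroring the constraint-based location
    search the benchmark queries require."""
    return [
        s for s in candidates
        if s.population_within_1km >= 5000      # demand constraint (assumed threshold)
        and s.distance_to_highway_km <= 2.0     # logistics constraint (assumed threshold)
        and s.flood_risk == "low"               # environmental constraint (assumed)
    ]

if __name__ == "__main__":
    candidates = [
        Site("A", 8200, 1.5, "low"),
        Site("B", 3100, 0.8, "low"),
        Site("C", 9400, 3.2, "medium"),
    ]
    print([s.site_id for s in feasible_sites(candidates)])  # -> ['A']

A direct code-generation baseline would prompt the model to emit such a filter in one shot, whereas agentic strategies like ReAct interleave tool calls and reasoning steps; the abstract reports that the latter often over-reason and underperform the former.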

@article{koda2025_2506.13841,
  title={LocationReasoner: Evaluating LLMs on Real-World Site Selection Reasoning},
  author={Miho Koda and Yu Zheng and Ruixian Ma and Mingyang Sun and Devesh Pansare and Fabio Duarte and Paolo Santi},
  journal={arXiv preprint arXiv:2506.13841},
  year={2025}
}