Geospatial code generation is emerging as a key direction at the intersection of artificial intelligence and geoscientific analysis. However, the field still lacks standardized tools for automatic evaluation. To address this gap, we propose AutoGEEval, the first multimodal, unit-level automated evaluation framework for geospatial code generation by large language models (LLMs) on the Google Earth Engine (GEE) platform. Built upon the GEE Python API, AutoGEEval establishes a benchmark suite (AutoGEEval-Bench) comprising 1,325 test cases that span 26 GEE data types. The framework integrates question-generation and answer-verification components to enable an end-to-end automated evaluation pipeline, from function invocation to execution validation. AutoGEEval supports multidimensional quantitative analysis of model outputs in terms of accuracy, resource consumption, execution efficiency, and error types. We evaluate 18 state-of-the-art LLMs, including general-purpose, reasoning-augmented, code-centric, and geoscience-specialized models, revealing their performance characteristics and potential optimization pathways in GEE code generation. This work provides a unified protocol and foundational resource for developing and assessing geospatial code generation models, advancing the frontier of automated translation from natural language to domain-specific code.
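To make the execution-based, unit-level evaluation concrete, the sketch below shows what one such check might look like on the GEE Python API: run a model-generated snippet, materialize the result, and compare it against a reference. The test-case schema (`candidate_code`, `expected`) and the pass/fail and error-type bookkeeping are illustrative assumptions, not AutoGEEval's actual interface.

```python
# Hypothetical sketch of a unit-level, execution-based check in the spirit of
# AutoGEEval: execute model-generated GEE Python code and validate its output.
# The test-case schema and comparison logic below are illustrative assumptions.
import time
import ee

ee.Initialize()  # requires prior Earth Engine authentication

test_case = {
    # Model-generated answer under test (hypothetical example).
    "candidate_code": "result = ee.Number(40).add(2)",
    # Reference value that the candidate's `result` must match.
    "expected": 42,
}

def evaluate(case: dict) -> dict:
    """Execute candidate code, fetch the server-side result, compare to the reference."""
    namespace = {"ee": ee}
    start = time.perf_counter()
    try:
        exec(case["candidate_code"], namespace)  # run the generated snippet
        actual = namespace["result"].getInfo()   # materialize the GEE object client-side
        passed = actual == case["expected"]
        error_type = None
    except Exception as exc:                     # record the error type on failure
        passed, error_type = False, type(exc).__name__
    return {
        "passed": passed,
        "error_type": error_type,
        "runtime_s": round(time.perf_counter() - start, 3),
    }

print(evaluate(test_case))  # e.g. {'passed': True, 'error_type': None, 'runtime_s': 0.4}
```

Aggregating such per-case records would yield the kinds of metrics the abstract names: accuracy (pass rate), execution efficiency (runtime), and a breakdown by error type.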
@article{hou2025_2505.12900,
  title={AutoGEEval: A Multimodal and Automated Framework for Geospatial Code Generation on GEE with Large Language Models},
  author={Shuyang Hou and Zhangxiao Shen and Huayi Wu and Jianyuan Liang and Haoyue Jiao and Yaxian Qing and Xiaopu Zhang and Xu Li and Zhipeng Gui and Xuefeng Guan and Longgang Xiang},
  journal={arXiv preprint arXiv:2505.12900},
  year={2025}
}