PlanGPT-VL: Enhancing Urban Planning with Domain-Specific Vision-Language Models

Abstract

In the field of urban planning, existing Vision-Language Models (VLMs) frequently fail to effectively analyze and evaluate planning maps, despite the critical importance of these visual elements for urban planners and related educational contexts. Planning maps, which visualize land use, infrastructure layouts, and functional zoning, require specialized understanding of spatial configurations, regulatory requirements, and multi-scale analysis. To address this challenge, we introduce PlanGPT-VL, the first domain-specific Vision-Language Model tailored specifically for urban planning maps. PlanGPT-VL employs three innovative approaches: (1) the PlanAnno-V framework for high-quality VQA data synthesis, (2) Critical Point Thinking, which reduces hallucinations through structured verification, and (3) a comprehensive training methodology that combines Supervised Fine-Tuning with frozen vision encoder parameters. Through systematic evaluation on our proposed PlanBench-V benchmark, we demonstrate that PlanGPT-VL significantly outperforms general-purpose state-of-the-art VLMs on specialized planning map interpretation tasks, offering urban planning professionals a reliable tool for map analysis, assessment, and educational applications while maintaining high factual accuracy. Our lightweight 7B-parameter model matches the performance of models exceeding 72B parameters, demonstrating efficient domain specialization without sacrificing capability.

@article{zhu2025_2505.14481,
  title={PlanGPT-VL: Enhancing Urban Planning with Domain-Specific Vision-Language Models},
  author={He Zhu and Junyou Su and Minxin Chen and Wen Wang and Yijie Deng and Guanhua Chen and Wenjia Zhang},
  journal={arXiv preprint arXiv:2505.14481},
  year={2025}
}