ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph

30 June 2024
Dazhou Yu
Yuntong Hu
Yun Li
Liang Zhao
Abstract

Polygon representation learning is essential for diverse applications, encompassing tasks such as shape coding, building pattern classification, and geographic question answering. While recent years have seen considerable advancements in this field, much of the focus has been on single polygons, overlooking the intricate inner- and inter-polygonal relationships inherent in multipolygons. To address this gap, our study introduces a comprehensive framework specifically designed for learning representations of polygonal geometries, particularly multipolygons. Central to our approach is the incorporation of a heterogeneous visibility graph, which seamlessly integrates both inner- and inter-polygonal relationships. To enhance computational efficiency and minimize graph redundancy, we implement a heterogeneous spanning tree sampling method. Additionally, we devise a rotation-translation invariant geometric representation, ensuring broader applicability across diverse scenarios. Finally, we introduce Multipolygon-GNN, a novel model tailored to leverage the spatial and semantic heterogeneity inherent in the visibility graph. Experiments on five real-world and synthetic datasets demonstrate its ability to capture informative representations for polygonal geometries. Code and data are available at this https URL.
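The heterogeneous visibility graph described in the abstract can be sketched in a few lines of plain Python. Everything below is an illustrative assumption rather than the paper's implementation: the function names, the three edge types ('boundary', 'inner', 'inter'), and the simplified visibility test (a proper-crossing check that ignores collinear and interior-containment edge cases) are all choices made for this sketch.

```python
from itertools import combinations

def _properly_cross(p, q, a, b):
    """True if segments pq and ab properly intersect (no shared or
    collinear endpoints); standard orientation-sign test."""
    def orient(o, x, y):
        v = (x[0] - o[0]) * (y[1] - o[1]) - (x[1] - o[1]) * (y[0] - o[0])
        return (v > 0) - (v < 0)
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1 != d2 and d3 != d4 and 0 not in (d1, d2, d3, d4)

def heterogeneous_visibility_graph(polygons):
    """Build a toy heterogeneous visibility graph for a multipolygon.

    polygons: list of simple polygons, each a list of (x, y) vertices.
    Returns (nodes, edges); each edge is (i, j, kind), where kind is
    'boundary' for polygon edges, 'inner' for visibility edges within
    one polygon, and 'inter' for visibility edges across polygons.
    """
    nodes, owner, boundary = [], [], []
    for pid, poly in enumerate(polygons):
        base = len(nodes)
        nodes.extend(poly)
        owner.extend([pid] * len(poly))
        boundary.extend((base + k, base + (k + 1) % len(poly))
                        for k in range(len(poly)))
    edges = [(i, j, 'boundary') for i, j in boundary]
    for i, j in combinations(range(len(nodes)), 2):
        if (i, j) in boundary or (j, i) in boundary:
            continue  # already a boundary edge
        # A pair is "visible" if no boundary segment (sharing no endpoint
        # with the pair) properly crosses the connecting segment.
        blocked = any(_properly_cross(nodes[i], nodes[j], nodes[a], nodes[b])
                      for a, b in boundary if len({i, j, a, b}) == 4)
        if not blocked:
            edges.append((i, j, 'inner' if owner[i] == owner[j] else 'inter'))
    return nodes, edges
```

This brute-force sketch enumerates all vertex pairs, so it scales quadratically; the abstract notes that the actual framework instead samples heterogeneous spanning trees to curb exactly this kind of graph redundancy.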

View on arXiv
@article{yu2025_2407.00742,
  title={PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph},
  author={Dazhou Yu and Yuntong Hu and Yun Li and Liang Zhao},
  journal={arXiv preprint arXiv:2407.00742},
  year={2025}
}