
A large-scale image-text dataset benchmark for farmland segmentation

Abstract

The traditional deep learning paradigm that relies solely on labeled data has limitations in representing the spatial relationships between farmland elements and the surrounding environment, and it struggles to effectively model the dynamic temporal evolution and spatial heterogeneity of farmland. Language, as a structured knowledge carrier, can explicitly express spatiotemporal characteristics of farmland such as its shape, distribution, and surrounding environment. Therefore, a language-driven learning paradigm can effectively alleviate the challenges posed by the spatiotemporal heterogeneity of farmland. However, in the field of remote sensing imagery of farmland, there is currently no comprehensive benchmark dataset to support this research. To fill this gap, we introduce language-based descriptions of farmland and develop FarmSeg-VL, the first fine-grained image-text dataset designed for spatiotemporal farmland segmentation. Specifically, this article proposes a semi-automatic annotation method that accurately assigns a caption to each image, ensuring high data quality and semantic richness while improving the efficiency of dataset construction. Moreover, FarmSeg-VL exhibits significant spatiotemporal coverage: in the temporal dimension it spans all four seasons, and in the spatial dimension it covers eight typical agricultural regions across China. In terms of captions, FarmSeg-VL describes rich spatiotemporal characteristics of farmland, including its inherent properties, phenological characteristics, spatial distribution, topographic and geomorphic features, and the distribution of the surrounding environment. Finally, we present a performance analysis of vision-language models (VLMs) and of deep learning models that rely solely on labels, both trained on FarmSeg-VL, demonstrating its potential as a standard benchmark for farmland segmentation.
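The paper does not specify the released data format; as a minimal sketch, assuming a hypothetical directory layout with per-split images/, masks/, and a captions.json file, an image-caption-mask triplet from a dataset like FarmSeg-VL could be exposed to a model as follows (PyTorch-style; the class name, paths, and fields are illustrative assumptions, not the authors' API):

# Hypothetical sketch (not the authors' released loader): illustrates how an
# image-caption-mask triplet from an image-text farmland dataset such as
# FarmSeg-VL might be served to a segmentation or vision-language model.
# The directory layout and file naming below are assumptions for illustration.
import json
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as T


class FarmlandImageTextDataset(Dataset):
    """Yields (image tensor, caption string, binary farmland mask) triplets."""

    def __init__(self, root: str, split: str = "train"):
        self.root = Path(root)
        self.split = split
        # Assumed layout: <root>/<split>/captions.json maps an image id to its
        # natural-language description; images/ and masks/ hold the matching
        # remote sensing tiles and farmland label rasters.
        with open(self.root / split / "captions.json") as f:
            self.captions = json.load(f)
        self.ids = sorted(self.captions.keys())
        self.to_tensor = T.ToTensor()

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        image_id = self.ids[idx]
        image = Image.open(self.root / self.split / "images" / f"{image_id}.png").convert("RGB")
        mask = Image.open(self.root / self.split / "masks" / f"{image_id}.png")
        # Caption describes e.g. shape, season, terrain, and surroundings.
        caption = self.captions[image_id]
        # Binarize the mask: non-zero pixels are treated as farmland.
        mask = (np.array(mask) > 0).astype(np.float32)
        return self.to_tensor(image), caption, mask

In such a setup, the caption could feed the text encoder of a VLM while the mask supervises the segmentation head; label-only baselines would simply ignore the caption field.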

@article{tao2025_2503.23106,
  title={A large-scale image-text dataset benchmark for farmland segmentation},
  author={Chao Tao and Dandan Zhong and Weiliang Mu and Zhuofei Du and Haiyang Wu},
  journal={arXiv preprint arXiv:2503.23106},
  year={2025}
}