Hyperspectral Image Land Cover Captioning Dataset for Vision Language Models

Abstract

We introduce HyperCap, the first large-scale hyperspectral captioning dataset designed for vision-language models in remote sensing. Unlike traditional hyperspectral imaging (HSI) datasets, which focus solely on classification, HyperCap pairs spectral data with pixel-wise textual annotations, enabling deeper semantic understanding of hyperspectral imagery and supporting tasks such as classification and feature extraction. HyperCap is constructed from four benchmark datasets and annotated through a hybrid approach that combines automated and manual methods to ensure accuracy and consistency. Empirical evaluations with state-of-the-art encoders and diverse fusion techniques demonstrate significant improvements in classification performance. These results underscore the potential of vision-language learning for HSI and position HyperCap as a foundational dataset for future research in the field.

@article{das2025_2505.12217,
  title={Hyperspectral Image Land Cover Captioning Dataset for Vision Language Models},
  author={Aryan Das and Tanishq Rachamalla and Pravendra Singh and Koushik Biswas and Vinay Kumar Verma and Swalpa Kumar Roy},
  journal={arXiv preprint arXiv:2505.12217},
  year={2025}
}