By mapping sites at large scales using remotely sensed data, archaeologists can generate unique insights into long-term demographic trends, inter-regional social networks, and past adaptations to climate change. Remote sensing surveys complement field-based approaches, and their reach can be especially great when combined with deep learning and computer vision techniques. However, conventional supervised deep learning methods face challenges in annotating fine-grained archaeological features at scale. While recent vision foundation models have shown remarkable success in learning from large-scale remote sensing data with minimal annotations, most off-the-shelf solutions are designed for RGB images rather than multi-spectral satellite imagery, such as the 8-band data used in our study. In this paper, we introduce DeepAndes, a transformer-based vision foundation model trained on three million multi-spectral satellite images and specifically tailored for Andean archaeology. DeepAndes incorporates a customized DINOv2 self-supervised learning algorithm optimized for 8-band multi-spectral imagery, making it the first foundation model designed explicitly for the Andes region. We evaluate its image understanding performance on imbalanced image classification, image instance retrieval, and pixel-level semantic segmentation tasks. Our experiments show that DeepAndes achieves superior F1 scores, mean average precision, and Dice scores in few-shot learning scenarios, significantly outperforming models trained from scratch or pre-trained on smaller datasets, underscoring the effectiveness of large-scale self-supervised pre-training in archaeological remote sensing. Code will be available at this https URL.
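The abstract does not detail how DINOv2 is customized for 8-band input, but the most direct adaptation of a ViT-style backbone to multi-spectral imagery is to widen its patch-embedding projection from 3 to 8 input channels. The PyTorch sketch below illustrates that idea under stated assumptions: the class name `PatchEmbed8Band` and all hyperparameters (224-pixel tiles, patch size 14, 768-dim embeddings) are illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn


class PatchEmbed8Band(nn.Module):
    """Illustrative 8-band patch embedding (hypothetical; not the paper's code).

    Standard ViT/DINOv2 backbones expect 3-channel RGB input; the simplest
    multi-spectral adaptation widens the first projection to 8 channels.
    """

    def __init__(self, img_size=224, patch_size=14, in_chans=8, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A Conv2d whose stride equals its kernel size splits the image into
        # non-overlapping patches and linearly projects each one to embed_dim.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 8, H, W) multi-spectral tile
        x = self.proj(x)                      # (B, embed_dim, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, embed_dim)


if __name__ == "__main__":
    batch = torch.randn(2, 8, 224, 224)       # two synthetic 8-band tiles
    tokens = PatchEmbed8Band()(batch)
    print(tokens.shape)                       # torch.Size([2, 256, 768])
```

The strided-convolution formulation is the standard ViT patch-embedding trick; the rest of a DINOv2-style encoder can consume the resulting token sequence unchanged, which is why widening this single layer suffices for multi-spectral input.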
@article{guo2025_2504.20303,
  title={DeepAndes: A Self-Supervised Vision Foundation Model for Multi-Spectral Remote Sensing Imagery of the Andes},
  author={Junlin Guo and James R. Zimmer-Dauphinee and Jordan M. Nieusma and Siqi Lu and Quan Liu and Ruining Deng and Can Cui and Jialin Yue and Yizhe Lin and Tianyuan Yao and Juming Xiong and Junchao Zhu and Chongyu Qu and Yuechen Yang and Mitchell Wilkes and Xiao Wang and Parker VanValkenburgh and Steven A. Wernke and Yuankai Huo},
  journal={arXiv preprint arXiv:2504.20303},
  year={2025}
}