
CoMP: Continual Multimodal Pre-training for Vision Foundation Models

Abstract

Pre-trained Vision Foundation Models (VFMs) provide strong visual representations for a wide range of applications. In this paper, we continually pre-train prevailing VFMs in a multimodal manner such that they can effortlessly process visual inputs of varying sizes and produce visual representations that are better aligned with language representations, regardless of their original pre-training process. To this end, we introduce CoMP, a carefully designed multimodal pre-training pipeline. CoMP uses a Continual Rotary Position Embedding to accommodate visual inputs of different resolutions, and an Alignment Loss between visual and textual features for better cross-modal alignment. After continual pre-training, leading VFMs such as DINOv2, SigLIP and AIMv2 achieve remarkable improvements not only on multimodal understanding tasks but also on generic classification and segmentation tasks. Remarkably, CoMP-AIMv2 achieves a score of 64.9 on ChartQA with a 0.5B LLM, while maintaining 87.3% accuracy on ImageNet-1K and 51.8 mIoU on ADE20K under frozen chunk evaluation.
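The abstract does not specify the form of the Alignment Loss; as a hedged illustration only, one common way to align paired visual and textual features is to maximize the cosine similarity between their L2-normalized embeddings. The sketch below (function name and formulation are assumptions, not the paper's actual loss) shows that idea:

```python
import numpy as np

def alignment_loss(visual_feats: np.ndarray, text_feats: np.ndarray) -> float:
    """Hypothetical alignment loss sketch: 1 minus the mean cosine
    similarity between paired visual and text feature rows.
    Shapes: (batch, dim) for both inputs."""
    # L2-normalize each feature vector along the last axis
    v = visual_feats / np.linalg.norm(visual_feats, axis=-1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    # Cosine similarity of paired rows, averaged over the batch;
    # perfectly aligned pairs give a loss of 0.
    return float(1.0 - np.mean(np.sum(v * t, axis=-1)))
```

Minimizing such a loss pulls each visual embedding toward its paired text embedding; the paper's actual objective may differ in detail (e.g., normalization scheme, use of negatives, or temperature).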

@article{chen2025_2503.18931,
  title={CoMP: Continual Multimodal Pre-training for Vision Foundation Models},
  author={Yitong Chen and Lingchen Meng and Wujian Peng and Zuxuan Wu and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2503.18931},
  year={2025}
}