
LongProLIP: A Probabilistic Vision-Language Model with Long Context Text

Abstract

Recently, Probabilistic Language-Image Pre-Training (ProLIP) has been proposed to tackle the multiplicity issue of vision-language (VL) tasks. Despite its success in probabilistic representation learning at scale, ProLIP cannot handle texts longer than 64 tokens, which limits its ability to capture rich contextual information from longer text sequences. To address this issue, this paper proposes a fine-tuning strategy for ProLIP to accept longer texts, e.g., 256 text tokens. Experimental results on Urban-1k and the DataComp evaluation suite show that the proposed LongProLIP recipe can improve long context understanding while minimizing the negative effect of fine-tuning. We also observe a trade-off between long context understanding (measured by Urban-1k) and general zero-shot capability (measured by the DataComp evaluation datasets). Code is available at this https URL
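The abstract does not spell out how the text encoder is adapted from 64 to 256 tokens, so the snippet below is only a minimal sketch of one common way to prepare a model for such fine-tuning: linearly interpolating the learned positional-embedding table to the new context length. The function name `extend_positional_embeddings` and the embedding dimension are illustrative assumptions, not the actual LongProLIP recipe.

```python
import torch
import torch.nn.functional as F


def extend_positional_embeddings(pos_emb: torch.Tensor, new_len: int = 256) -> torch.Tensor:
    """Resize a learned positional-embedding table of shape (old_len, dim)
    to (new_len, dim) by 1-D linear interpolation along the sequence axis,
    so the text encoder can accept longer inputs before fine-tuning.
    (Hypothetical sketch; the paper's actual strategy may differ.)"""
    old_len, dim = pos_emb.shape
    # (old_len, dim) -> (1, dim, old_len): F.interpolate expects (N, C, L)
    resized = F.interpolate(
        pos_emb.T.unsqueeze(0), size=new_len, mode="linear", align_corners=False
    )
    return resized.squeeze(0).T  # back to (new_len, dim)


# Example: grow a 64-token table to 256 tokens
old_table = torch.randn(64, 512)
new_table = extend_positional_embeddings(old_table, 256)
print(new_table.shape)  # torch.Size([256, 512])
```

After resizing, the model would typically be fine-tuned on long-caption data (e.g., Urban-1k-style captions) so the interpolated positions adapt to real token statistics.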

@article{chun2025_2503.08048,
  title={LongProLIP: A Probabilistic Vision-Language Model with Long Context Text},
  author={Sanghyuk Chun and Sangdoo Yun},
  journal={arXiv preprint arXiv:2503.08048},
  year={2025}
}