
TULIP: Token-length Upgraded CLIP

13 October 2024
Ivona Najdenkoska
Mohammad Mahdi Derakhshani
Yuki M. Asano
Nanne van Noord
Marcel Worring
Cees G. M. Snoek
Topics: VLM
Abstract

We address the challenge of representing long captions in vision-language models such as CLIP. By design, these models are limited by fixed, absolute positional encodings, restricting inputs to a maximum of 77 tokens and hindering performance on tasks that require longer descriptions. Although recent work has attempted to overcome this limit, the proposed approaches struggle to model token relationships over longer distances and merely extend the encoder to a new, fixed token length. Instead, we propose TULIP, a generalizable method that upgrades the token length of CLIP-like models to any length. We do so by improving the architecture with relative position encodings, followed by a training procedure that (i) distills the original CLIP text encoder into an encoder with relative position encodings and (ii) enhances the model for aligning longer captions with images. By effectively encoding captions longer than the default 77 tokens, our model outperforms baselines on cross-modal tasks such as retrieval and text-to-image generation.
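To make the two ingredients concrete, below is a minimal PyTorch sketch of (1) a self-attention layer with learned relative position biases, which depend only on token distance and therefore extend beyond a fixed 77-token window, and (2) a cosine-similarity distillation loss for matching a frozen CLIP teacher, as in step (i). The bias-table formulation, module names, sizes, and loss choice are illustrative assumptions for this sketch, not the paper's exact implementation.

# Minimal sketch of the two TULIP ingredients described in the abstract.
# All names, sizes, and the bias-table scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelPosSelfAttention(nn.Module):
    """Self-attention with a learned relative-position bias.

    Because the bias depends only on the token distance (i - j), the same
    module runs on sequences longer than those seen during training,
    unlike a fixed table of 77 absolute position embeddings.
    """
    def __init__(self, dim: int, n_heads: int, max_rel_dist: int = 128):
        super().__init__()
        self.n_heads = n_heads
        self.scale = (dim // n_heads) ** -0.5
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # One learned bias per head per clipped relative distance.
        self.max_rel_dist = max_rel_dist
        self.rel_bias = nn.Embedding(2 * max_rel_dist + 1, n_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, T, self.n_heads, -1).transpose(1, 2)  # (B, H, T, d)
        k = k.view(B, T, self.n_heads, -1).transpose(1, 2)
        v = v.view(B, T, self.n_heads, -1).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale        # (B, H, T, T)
        # Relative distances i - j, clipped to the learned range.
        pos = torch.arange(T, device=x.device)
        rel = (pos[:, None] - pos[None, :]).clamp(-self.max_rel_dist,
                                                  self.max_rel_dist)
        attn = attn + self.rel_bias(rel + self.max_rel_dist).permute(2, 0, 1)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, T, D)
        return self.proj(out)

def distillation_loss(student_emb: torch.Tensor,
                      teacher_emb: torch.Tensor) -> torch.Tensor:
    """Match the student's caption embeddings to the frozen CLIP teacher's.

    A cosine-similarity objective is one natural choice for step (i);
    the paper's exact distillation loss may differ.
    """
    return 1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1).mean()

# Example: the same module accepts captions well past 77 tokens.
attn = RelPosSelfAttention(dim=512, n_heads=8)
x = torch.randn(2, 200, 512)   # batch of two 200-token captions
out = attn(x)                  # (2, 200, 512)

After distilling within the original 77-token window, the upgraded encoder can then be fine-tuned on longer caption-image pairs, corresponding to step (ii).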

@article{najdenkoska2025_2410.10034,
  title={TULIP: Token-length Upgraded CLIP},
  author={Ivona Najdenkoska and Mohammad Mahdi Derakhshani and Yuki M. Asano and Nanne van Noord and Marcel Worring and Cees G. M. Snoek},
  journal={arXiv preprint arXiv:2410.10034},
  year={2025}
}