Leveraging Auxiliary Information in Text-to-Video Retrieval: A Review

29 May 2025
Adriano Fragomeni
Dima Damen
Michael Wray
Main: 22 pages · 7 figures · 12 tables · Bibliography: 6 pages · Appendix: 5 pages
Abstract

Text-to-Video (T2V) retrieval aims to identify the most relevant item from a gallery of videos based on a user's text query. Traditional methods rely solely on aligning video and text modalities to compute the similarity and retrieve relevant items. However, recent advancements emphasise incorporating auxiliary information extracted from video and text modalities to improve retrieval performance and bridge the semantic gap between these modalities. Auxiliary information can include visual attributes, such as objects; temporal and spatial context; and textual descriptions, such as speech and rephrased captions. This survey comprehensively reviews 81 research papers on Text-to-Video retrieval that utilise such auxiliary information. It provides a detailed analysis of their methodologies; highlights state-of-the-art results on benchmark datasets; and discusses available datasets and their auxiliary information. Additionally, it proposes promising directions for future research, focusing on different ways to further enhance retrieval performance using this information.

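As a rough illustration of the baseline setup the abstract describes (not the survey's own method), the sketch below ranks a gallery of video embeddings against a text-query embedding by cosine similarity, then fuses in a second similarity term computed from auxiliary features. The random vectors stand in for the outputs of trained video and text encoders, and the fusion weight is an arbitrary illustrative choice, not a value taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings: in practice these would come from trained video and
# text encoders (e.g. a dual-encoder model); random vectors are used here only
# so the example runs end to end.
num_videos, dim = 5, 256
video_emb = rng.standard_normal((num_videos, dim))   # gallery of video embeddings
aux_emb = rng.standard_normal((num_videos, dim))     # auxiliary info per video (e.g. objects, speech)
query_emb = rng.standard_normal(dim)                 # user's text query embedding

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Traditional T2V retrieval: cosine similarity between the text query and each video.
sim_video = l2_normalize(video_emb) @ l2_normalize(query_emb)

# One simple way to leverage auxiliary information: late fusion of an extra
# similarity term (the 0.3 weight is purely illustrative).
sim_aux = l2_normalize(aux_emb) @ l2_normalize(query_emb)
scores = sim_video + 0.3 * sim_aux

ranking = np.argsort(-scores)  # video indices, most relevant first
print("Ranked video indices:", ranking)
```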
@article{fragomeni2025_2505.23952,
  title={Leveraging Auxiliary Information in Text-to-Video Retrieval: A Review},
  author={Adriano Fragomeni and Dima Damen and Michael Wray},
  journal={arXiv preprint arXiv:2505.23952},
  year={2025}
}