
DisTime: Distribution-based Time Representation for Video Large Language Models

Main: 8 pages · 13 figures · 13 tables · Bibliography: 3 pages · Appendix: 6 pages
Abstract

Despite advances in general video understanding, Video Large Language Models (Video-LLMs) face challenges in precise temporal localization due to discrete time representations and limited temporally aware datasets. Existing methods for temporal expression either conflate time with text-based numerical values, add a series of dedicated temporal tokens, or regress time using specialized temporal grounding heads. To address these issues, we introduce DisTime, a lightweight framework designed to enhance temporal comprehension in Video-LLMs. DisTime employs a learnable token to create a continuous temporal embedding space and incorporates a Distribution-based Time Decoder that generates temporal probability distributions, effectively mitigating boundary ambiguities and maintaining temporal continuity. Additionally, the Distribution-based Time Encoder re-encodes timestamps to provide time markers for Video-LLMs. To overcome temporal granularity limitations in existing datasets, we propose an automated annotation paradigm that combines the captioning capabilities of Video-LLMs with the localization expertise of dedicated temporal models. This leads to the creation of InternVid-TG, a substantial dataset with 1.25M temporally grounded events across 179k videos, surpassing ActivityNet-Caption by 55 times. Extensive experiments demonstrate that DisTime achieves state-of-the-art performance across benchmarks in three time-sensitive tasks while maintaining competitive performance in Video QA tasks. Code and data are released at this https URL.
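
To make the decoder/encoder idea in the abstract concrete, below is a minimal PyTorch sketch of how a distribution-based time decoder and encoder pair might be wired up: the decoder turns the hidden state of a learnable time token into softmax distributions over time bins and takes their expectation to obtain continuous start/end timestamps, and the encoder maps timestamps back into the LLM embedding space as time markers. The module names, hidden size, bin count, and MLP shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class DistributionTimeDecoder(nn.Module):
    """Hypothetical sketch: maps a learnable <TIME> token's hidden state to
    probability distributions over normalized time bins, then reduces each
    distribution to a continuous timestamp via its expectation."""

    def __init__(self, hidden_dim: int = 4096, num_bins: int = 100):
        super().__init__()
        self.num_bins = num_bins
        # Two heads in one projection: a bin distribution for the start time
        # and another for the end time.
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 2 * num_bins),
        )
        # Bin centers on the normalized [0, 1] video timeline.
        self.register_buffer("bin_centers", torch.linspace(0.0, 1.0, num_bins))

    def forward(self, time_token_hidden: torch.Tensor) -> torch.Tensor:
        # time_token_hidden: (batch, hidden_dim) hidden state of the time token.
        logits = self.proj(time_token_hidden).view(-1, 2, self.num_bins)
        probs = logits.softmax(dim=-1)                  # (batch, 2, num_bins)
        # Expectation over bins yields continuous start/end values in [0, 1],
        # softening hard boundary decisions into a distribution.
        return (probs * self.bin_centers).sum(dim=-1)   # (batch, 2)


class DistributionTimeEncoder(nn.Module):
    """Hypothetical sketch: re-encodes normalized (start, end) timestamps into
    the LLM embedding space so they can serve as input time markers."""

    def __init__(self, hidden_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(2, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, times: torch.Tensor) -> torch.Tensor:
        # times: (batch, 2) normalized start/end in [0, 1].
        return self.proj(times)                         # (batch, hidden_dim)


if __name__ == "__main__":
    decoder = DistributionTimeDecoder(hidden_dim=4096, num_bins=100)
    encoder = DistributionTimeEncoder(hidden_dim=4096)
    hidden = torch.randn(2, 4096)      # stand-in for <TIME> token hidden states
    start_end = decoder(hidden)        # (2, 2) continuous timestamps in [0, 1]
    time_embed = encoder(start_end)    # (2, 4096) re-encoded time markers
    print(start_end.shape, time_embed.shape)
```

In this reading, keeping timestamps as expectations over a learned distribution (rather than discrete text tokens) is what preserves temporal continuity, and feeding the re-encoded timestamps back as markers closes the loop between localization output and the model's input sequence.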

@article{zeng2025_2505.24329,
  title={DisTime: Distribution-based Time Representation for Video Large Language Models},
  author={Yingsen Zeng and Zepeng Huang and Yujie Zhong and Chengjian Feng and Jie Hu and Lin Ma and Yang Liu},
  journal={arXiv preprint arXiv:2505.24329},
  year={2025}
}