Principles of Visual Tokens for Efficient Video Understanding

Video understanding has made significant strides in recent years, relying largely on the power of transformers. As this architecture is notoriously expensive and video data is highly redundant, research into improving efficiency has become particularly relevant. Creative solutions include token selection and token merging. While most methods succeed in reducing the cost of the model while maintaining accuracy, an interesting pattern emerges: most methods do not outperform the baseline of randomly discarding tokens. In this paper we take a closer look at this phenomenon and identify five principles about the nature of visual tokens. For example, we observe that the value of tokens follows a clear Pareto distribution: most tokens have remarkably low value, and just a few carry most of the perceptual information. We build on these and further insights to propose a lightweight video model, LITE, that can select a small number of tokens effectively, outperforming state-of-the-art methods and existing baselines across datasets (Kinetics-400 and Something-Something-V2) in the challenging trade-off between computation (GFLOPs) and accuracy. Experiments also show that LITE generalizes across datasets and even to other tasks without the need for retraining.
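To make the comparison in the abstract concrete, the minimal sketch below contrasts the random-token-dropping baseline with selection by a small learned scorer that keeps the highest-valued tokens. This is an illustrative assumption, not the paper's actual LITE implementation; the class name ScoredTokenSelector, the MLP scorer, and the keep_ratio parameter are hypothetical, and tensor shapes follow the usual ViT convention of (batch, num_tokens, dim).

# Hypothetical sketch (assumed, not the paper's LITE method): random token dropping
# vs. keeping the top-k tokens ranked by a lightweight learned scorer.
import torch
import torch.nn as nn


def drop_tokens_randomly(tokens: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Baseline: keep a random subset of tokens, sampled independently per video."""
    b, n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    # Random permutation per sample; keep the first k indices.
    idx = torch.rand(b, n, device=tokens.device).argsort(dim=1)[:, :k]
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))


class ScoredTokenSelector(nn.Module):
    """Keep the top-k tokens ranked by a small MLP that estimates per-token value."""

    def __init__(self, dim: int, keep_ratio: float = 0.1):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.scorer = nn.Sequential(
            nn.Linear(dim, dim // 4), nn.GELU(), nn.Linear(dim // 4, 1)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b, n, d = tokens.shape
        k = max(1, int(n * self.keep_ratio))
        scores = self.scorer(tokens).squeeze(-1)      # (b, n) estimated token value
        idx = scores.topk(k, dim=1).indices           # indices of the most valuable tokens
        return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))


if __name__ == "__main__":
    x = torch.randn(2, 1568, 768)  # e.g. 8 frames of 14x14 patches at ViT-B width
    print(drop_tokens_randomly(x, 0.1).shape)      # torch.Size([2, 156, 768])
    print(ScoredTokenSelector(768, 0.1)(x).shape)  # torch.Size([2, 156, 768])

If the Pareto-like distribution of token value reported in the abstract holds, a selector of this kind can discard roughly 90% of tokens while retaining most of the perceptually informative ones, which is the computation-vs-accuracy trade-off the paper targets.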
@article{hao2025_2411.13626,
  title   = {Principles of Visual Tokens for Efficient Video Understanding},
  author  = {Xinyue Hao and Gen Li and Shreyank N Gowda and Robert B Fisher and Jonathan Huang and Anurag Arnab and Laura Sevilla-Lara},
  journal = {arXiv preprint arXiv:2411.13626},
  year    = {2025}
}