ResearchTrend.AI
Inference Compute-Optimal Video Vision Language Models

24 May 2025
Peiqi Wang, ShengYun Peng, Xuewen Zhang, Hanchao Yu, Yibo Yang, Lifu Huang, Fujun Liu, Qifan Wang
Abstract

This work investigates the optimal allocation of inference compute across three key scaling factors in video vision language models: language model size, frame count, and the number of visual tokens per frame. While prior work typically focuses on optimizing model efficiency or improving performance without considering resource constraints, we instead identify optimal model configurations under fixed inference compute budgets. We conduct large-scale training sweeps and careful parametric modeling of task performance to identify the inference compute-optimal frontier. Our experiments reveal how task performance depends on the scaling factors and on finetuning data size, as well as how changes in data size shift the compute-optimal frontier. These findings translate into practical tips for selecting these scaling factors.
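The trade-off the abstract describes can be made concrete with a small sketch. The snippet below is illustrative and not from the paper: it enumerates hypothetical (model size, frame count, visual tokens per frame) configurations that fit under a fixed inference FLOP budget, using the common rough approximation that a decoder forward pass costs about 2 × parameters × input tokens FLOPs. All model sizes, token counts, and the budget are made-up example values.

```python
# Illustrative sketch (not the paper's method): enumerate video-VLM
# configurations feasible under a fixed inference FLOP budget.
from itertools import product

def forward_flops(params: float, frames: int, tokens_per_frame: int,
                  text_tokens: int = 256) -> float:
    """Rough decoder cost: ~2 * params * total input tokens (FLOPs)."""
    total_tokens = frames * tokens_per_frame + text_tokens
    return 2.0 * params * total_tokens

def configs_under_budget(budget_flops: float):
    model_sizes = [0.5e9, 2e9, 7e9]      # hypothetical LM parameter counts
    frame_counts = [8, 16, 32]           # frames sampled per video
    tokens_per_frame = [36, 72, 144]     # visual tokens kept per frame
    feasible = []
    for p, f, t in product(model_sizes, frame_counts, tokens_per_frame):
        if forward_flops(p, f, t) <= budget_flops:
            feasible.append((p, f, t))
    return feasible

# A 1e13-FLOP budget admits small-model / long-video configurations
# while excluding large-model / dense-token ones.
feasible = configs_under_budget(1e13)
```

Among the feasible configurations, the paper's parametric performance model would then pick the one maximizing predicted task performance; the sketch only covers the budget-feasibility step.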

View on arXiv
@article{wang2025_2505.18855,
  title={Inference Compute-Optimal Video Vision Language Models},
  author={Peiqi Wang and ShengYun Peng and Xuewen Zhang and Hanchao Yu and Yibo Yang and Lifu Huang and Fujun Liu and Qifan Wang},
  journal={arXiv preprint arXiv:2505.18855},
  year={2025}
}