Can We Predict Performance of Large Models across Vision-Language Tasks?

14 October 2024
Qinyu Zhao
Ming Xu
Kartik Gupta
Akshay Asthana
Liang Zheng
Stephen Gould
Abstract

Evaluating large vision-language models (LVLMs) is very expensive, due to high computational cost and the wide variety of tasks. The good news is that if we already have some observed performance scores, we may be able to infer unknown ones. In this study, we propose a new framework for predicting unknown performance scores based on observed ones from other LVLMs or tasks. We first formulate the performance prediction as a matrix completion task. Specifically, we construct a sparse performance matrix $\boldsymbol{R}$, where each entry $R_{mn}$ represents the performance score of the $m$-th model on the $n$-th dataset. By applying probabilistic matrix factorization (PMF) with Markov chain Monte Carlo (MCMC), we can complete the performance matrix, i.e., predict unknown scores. Additionally, we estimate the uncertainty of performance prediction based on MCMC. Practitioners can evaluate their models on untested tasks with higher uncertainty first, which quickly reduces the prediction errors. We further introduce several improvements to enhance PMF for scenarios with sparse observed performance scores. Our experiments demonstrate the accuracy of PMF in predicting unknown scores, the reliability of uncertainty estimates in ordering evaluations, and the effectiveness of our enhancements for handling sparse data.
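
The pipeline described in the abstract (build a sparse model-by-dataset performance matrix, complete it with PMF via MCMC, then evaluate the most uncertain entries first) can be illustrated with a small self-contained sketch. The code below is an assumption-laden illustration, not the authors' implementation: the matrix sizes, Gaussian priors, random-walk Metropolis sampler, and synthetic data are all placeholders chosen for brevity.

# Minimal sketch: PMF matrix completion with MCMC and uncertainty-guided
# selection of the next evaluation. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

M, N, K = 8, 6, 3                          # models, datasets, latent factors (assumed)
sigma, sigma_u, sigma_v = 0.1, 1.0, 1.0    # observation noise and prior scales (assumed)

# Synthetic "true" performance scores and a roughly 50%-observed mask.
true_U = rng.normal(size=(M, K))
true_V = rng.normal(size=(N, K))
R_full = true_U @ true_V.T
mask = rng.random((M, N)) < 0.5
R_obs = np.where(mask, R_full, np.nan)     # sparse performance matrix R

def log_posterior(U, V):
    """Gaussian likelihood on observed entries plus Gaussian priors on U and V."""
    resid = (R_obs - U @ V.T) / sigma
    ll = -0.5 * np.nansum(resid ** 2)      # observed entries only
    lp = -0.5 * (np.sum(U**2) / sigma_u**2 + np.sum(V**2) / sigma_v**2)
    return ll + lp

# Random-walk Metropolis over the latent factors (a simple stand-in for MCMC).
U = rng.normal(size=(M, K))
V = rng.normal(size=(N, K))
logp = log_posterior(U, V)
samples = []
for step in range(20000):
    U_prop = U + 0.02 * rng.normal(size=U.shape)
    V_prop = V + 0.02 * rng.normal(size=V.shape)
    logp_prop = log_posterior(U_prop, V_prop)
    if np.log(rng.random()) < logp_prop - logp:    # Metropolis accept/reject
        U, V, logp = U_prop, V_prop, logp_prop
    if step >= 10000 and step % 50 == 0:           # thin post-burn-in draws
        samples.append(U @ V.T)

samples = np.stack(samples)
pred_mean = samples.mean(axis=0)   # completed performance matrix
pred_std = samples.std(axis=0)     # per-entry predictive uncertainty

# Uncertainty-guided ordering: evaluate the most uncertain unobserved entry first.
masked_std = np.where(mask, -np.inf, pred_std)
m_next, n_next = np.unravel_index(np.argmax(masked_std), masked_std.shape)
print(f"Next evaluation: model {m_next} on dataset {n_next} "
      f"(predicted {pred_mean[m_next, n_next]:.3f} ± {pred_std[m_next, n_next]:.3f})")

In the paper's setting, observed entries would come from actual benchmark runs rather than a synthetic factor model, and the authors describe further enhancements to PMF for very sparse matrices; this sketch only demonstrates the completion-plus-uncertainty loop.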

@article{zhao2025_2410.10112,
  title={Can We Predict Performance of Large Models across Vision-Language Tasks?},
  author={Qinyu Zhao and Ming Xu and Kartik Gupta and Akshay Asthana and Liang Zheng and Stephen Gould},
  journal={arXiv preprint arXiv:2410.10112},
  year={2025}
}