
VLMInferSlow: Evaluating the Efficiency Robustness of Large Vision-Language Models as a Service

Main: 12 pages
Appendix: 3 pages
Bibliography: 1 page
13 figures
9 tables
Abstract

Vision-Language Models (VLMs) have demonstrated great potential in real-world applications. While existing research primarily focuses on improving their accuracy, their efficiency remains underexplored. Given the real-time demands of many applications and the high inference overhead of VLMs, efficiency robustness is a critical issue. However, previous studies evaluate efficiency robustness under unrealistic assumptions that require white-box access to the model architecture and parameters, which is impractical in ML-as-a-service settings where VLMs are deployed via inference APIs. To address this gap, we propose VLMInferSlow, a novel approach for evaluating VLM efficiency robustness in a realistic black-box setting. VLMInferSlow incorporates fine-grained efficiency modeling tailored to VLM inference and leverages zero-order optimization to search for adversarial examples. Experimental results show that VLMInferSlow generates adversarial images with imperceptible perturbations, increasing the computational cost by up to 128.47%. We hope this research raises the community's awareness about the efficiency robustness of VLMs.
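To illustrate the black-box setting the abstract describes, below is a minimal sketch, not the authors' implementation, of zero-order (gradient-free) optimization for an efficiency attack: only a scalar cost returned by an inference API is observed, and a gradient is estimated with random finite differences. The query_cost oracle, the L-infinity projection, and all hyperparameters are illustrative assumptions; the paper's own efficiency modeling and optimization details may differ.

# Minimal sketch (assumptions labeled) of black-box zero-order optimization
# for an efficiency attack: we only observe a scalar cost proxy returned by
# an inference API and estimate a gradient via two-point finite differences.
import numpy as np

def query_cost(image: np.ndarray) -> float:
    """Hypothetical black-box oracle: returns a scalar proxy for inference
    cost (e.g., decoded token count or latency). Dummy stand-in here;
    in practice this would wrap calls to the deployed VLM API."""
    return float((image ** 2).mean())  # toy cost, for demonstration only

def zero_order_attack(image, epsilon=8 / 255, sigma=1e-3, lr=1e-2,
                      n_samples=20, n_iters=100, seed=0):
    """Search for an imperceptible perturbation that increases the cost,
    using a random-direction two-point gradient estimator."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(image)
    for _ in range(n_iters):
        grad_est = np.zeros_like(image)
        for _ in range(n_samples):
            u = rng.standard_normal(image.shape)
            c_plus = query_cost(np.clip(image + delta + sigma * u, 0.0, 1.0))
            c_minus = query_cost(np.clip(image + delta - sigma * u, 0.0, 1.0))
            grad_est += (c_plus - c_minus) / (2.0 * sigma) * u
        grad_est /= n_samples
        # Gradient ascent on the cost, projected onto an L_inf ball of radius epsilon
        # so the perturbation stays visually imperceptible.
        delta = np.clip(delta + lr * np.sign(grad_est), -epsilon, epsilon)
    return np.clip(image + delta, 0.0, 1.0)

if __name__ == "__main__":
    img = np.random.default_rng(1).random((3, 32, 32))
    adv = zero_order_attack(img, n_iters=5)
    print("cost before:", query_cost(img), "cost after:", query_cost(adv))

The key design point is that each gradient estimate costs 2 * n_samples API queries, so the query budget, not backpropagation, becomes the limiting resource in this setting.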

@article{wang2025_2506.15755,
  title={VLMInferSlow: Evaluating the Efficiency Robustness of Large Vision-Language Models as a Service},
  author={Xiasi Wang and Tianliang Yao and Simin Chen and Runqi Wang and Lei YE and Kuofeng Gao and Yi Huang and Yuan Yao},
  journal={arXiv preprint arXiv:2506.15755},
  year={2025}
}