
Evaluating Cell Type Inference in Vision Language Models Under Varying Visual Context

Main: 3 Pages
1 Figure
2 Tables
Appendix: 2 Pages
Abstract

Vision-Language Models (VLMs) have rapidly advanced alongside Large Language Models (LLMs). This study evaluates the capabilities of prominent generative VLMs, such as GPT-4.1 and Gemini 2.5 Pro, accessed via APIs, for histopathology image classification tasks, including cell typing. Using diverse datasets from public and private sources, we apply zero-shot and one-shot prompting methods to assess VLM performance, comparing them against custom-trained Convolutional Neural Networks (CNNs). Our findings demonstrate that while one-shot prompting significantly improves VLM performance over zero-shot ($p \approx 1.005 \times 10^{-5}$ based on Kappa scores), these general-purpose VLMs currently underperform supervised CNNs on most tasks. This work underscores both the promise and limitations of applying current VLMs to specialized domains like pathology via in-context learning. All code and instructions for reproducing the study can be accessed from the repository: this https URL.
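As an illustrative sketch of the prompting workflow described in the abstract (not the authors' released code), the snippet below shows how a zero-shot and a one-shot request to an API-accessed VLM might be issued, with Cohen's kappa scoring agreement against ground-truth labels. The model identifier, label set, and file paths are assumptions made for the example.

# Illustrative sketch, not the authors' released code: querying a generative VLM
# through the OpenAI chat API with zero-shot and one-shot prompts for cell-type
# classification, then scoring agreement with Cohen's kappa. The model name,
# label set, and file paths are hypothetical.
import base64

from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment
LABELS = ["lymphocyte", "epithelial", "fibroblast", "macrophage"]  # hypothetical label set


def encode_image(path: str) -> str:
    """Return the base64 payload for embedding an image in the prompt."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def classify(image_path: str, example: tuple[str, str] | None = None) -> str:
    """Ask the VLM for a cell type; pass (exemplar_path, exemplar_label) for one-shot."""
    content = []
    if example is not None:  # one-shot: prepend a single labelled exemplar
        ex_path, ex_label = example
        content += [
            {"type": "text", "text": f"Example histopathology patch showing a {ex_label}:"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{encode_image(ex_path)}"}},
        ]
    content += [
        {"type": "text",
         "text": "Classify the cell in this histopathology patch. "
                 f"Answer with exactly one of: {', '.join(LABELS)}."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/png;base64,{encode_image(image_path)}"}},
    ]
    resp = client.chat.completions.create(
        model="gpt-4.1",  # assumed model identifier
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    # Hypothetical mini-evaluation: the same patches scored in both settings.
    patches = ["patch_001.png", "patch_002.png"]
    y_true = ["lymphocyte", "epithelial"]
    zero_shot = [classify(p) for p in patches]
    one_shot = [classify(p, example=("exemplar_lymphocyte.png", "lymphocyte")) for p in patches]
    print("zero-shot kappa:", cohen_kappa_score(y_true, zero_shot))
    print("one-shot  kappa:", cohen_kappa_score(y_true, one_shot))

The one-shot request differs from the zero-shot one only in the labelled exemplar prepended to the prompt; the paper's significance claim rests on comparing the resulting Kappa scores across tasks in a paired fashion.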

@article{singhal2025_2506.12683,
  title={Evaluating Cell Type Inference in Vision Language Models Under Varying Visual Context},
  author={Samarth Singhal and Sandeep Singhal},
  journal={arXiv preprint arXiv:2506.12683},
  year={2025}
}