A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models

Main: 9 pages · Appendix: 6 pages · Bibliography: 3 pages · 9 figures · 7 tables
Abstract

Large Vision-Language Models (LVLMs), despite their recent success, have rarely been comprehensively tested for their cognitive abilities. Inspired by the prevalent use of the "Cookie Theft" task in human cognition tests, we propose a novel evaluation benchmark that assesses the high-level cognitive abilities of LVLMs using images with rich semantics. The benchmark defines eight reasoning capabilities and consists of an image description task and a visual question answering task. Our evaluation of well-known LVLMs shows that a large gap in cognitive ability remains between LVLMs and humans.
