Decomposing Complex Visual Comprehension into Atomic Visual Skills for Vision Language Models

26 May 2025
Hyunsik Chae
Seungwoo Yoon
Jaden Park
Chloe Yewon Chun
Yongin Cho
Mu Cai
Yong Jae Lee
Ernest K. Ryu
Main: 9 pages · Appendix: 55 pages · Bibliography: 5 pages · 17 figures · 16 tables
Abstract

Recent Vision-Language Models (VLMs) have demonstrated impressive multimodal comprehension and reasoning capabilities, yet they often struggle with trivially simple visual tasks. In this work, we focus on the domain of basic 2D Euclidean geometry and systematically categorize the fundamental, indivisible visual perception skills, which we refer to as atomic visual skills. We then introduce the Atomic Visual Skills Dataset (AVSD) for evaluating VLMs on these atomic visual skills. Using AVSD, we benchmark state-of-the-art VLMs and find that they struggle with these tasks, despite being trivial for adult humans. Our findings highlight the need for purpose-built datasets to train and evaluate VLMs on atomic, rather than composite, visual perception tasks.
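The benchmarking described in the abstract can be pictured as a simple question-answering evaluation loop over atomic visual skill items. The sketch below is illustrative only: the record schema (image, question, answer fields), the JSON file name, and the query_vlm() helper are assumptions for this example, not the authors' released AVSD format or evaluation code.

import json
from pathlib import Path

def query_vlm(image_path: str, question: str) -> str:
    """Placeholder for a call to whichever vision-language model is being evaluated."""
    raise NotImplementedError("Plug in your VLM client here.")

def evaluate(dataset_path: str) -> float:
    """Compute exact-match accuracy over (image, question, answer) records."""
    records = json.loads(Path(dataset_path).read_text())
    correct = 0
    for rec in records:
        prediction = query_vlm(rec["image"], rec["question"])
        # Normalize casing and whitespace before comparing to the reference answer.
        correct += prediction.strip().lower() == rec["answer"].strip().lower()
    return correct / len(records)

if __name__ == "__main__":
    print(f"Accuracy: {evaluate('avsd_sample.json'):.3f}")

Exact-match accuracy is just one plausible scoring rule for short geometric answers; the paper itself may use a different metric or answer-matching scheme.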

@article{chae2025_2505.20021,
  title={Decomposing Complex Visual Comprehension into Atomic Visual Skills for Vision Language Models},
  author={Hyunsik Chae and Seungwoo Yoon and Jaden Park and Chloe Yewon Chun and Yongin Cho and Mu Cai and Yong Jae Lee and Ernest K. Ryu},
  journal={arXiv preprint arXiv:2505.20021},
  year={2025}
}