Can Visual Encoder Learn to See Arrows?

26 May 2025
Naoyuki Terashita
Yusuke Tozaki
Hideaki Omote
Congkha Nguyen
Ryosuke Nakamoto
Yuta Koreeda
Hiroaki Ozaki
ArXiv (abs) | PDF | HTML
Main: 4 pages | 3 figures | 2 tables | Bibliography: 2 pages
Abstract

A diagram is a visual representation of relationships illustrated with edges (lines or arrows) and is widely used in industrial and scientific communication. Although recognizing diagrams is essential for vision language models (VLMs) to comprehend domain-specific knowledge, recent studies reveal that many VLMs fail to identify edges in images. We hypothesize that these failures stem from an over-reliance on textual and positional biases, preventing VLMs from learning explicit edge features. Based on this idea, we empirically investigate whether the image encoder in VLMs can learn edge representation through training on a diagram dataset in which edges are biased neither by textual nor positional information. To this end, we conduct contrastive learning on an artificially generated diagram-caption dataset to train an image encoder and evaluate its diagram-related features on three tasks: probing, image retrieval, and captioning. Our results show that the finetuned model outperforms pretrained CLIP in all tasks and surpasses zero-shot GPT-4o and LLaVA-Mistral in the captioning task. These findings confirm that eliminating textual and positional biases fosters accurate edge recognition in VLMs, offering a promising path for advancing diagram understanding.
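To make the training recipe concrete, the following is a minimal sketch of CLIP-style contrastive fine-tuning on diagram-caption pairs, assuming PyTorch and Hugging Face transformers. The dataset placeholder diagram_caption_pairs, the checkpoint choice, and the hyperparameters are illustrative assumptions, not the authors' actual setup (the paper's synthetic data generation and which encoder components are updated may differ).

import torch
from torch.utils.data import DataLoader
from transformers import CLIPModel, CLIPProcessor

# Illustrative placeholder: a list of (PIL.Image, str) diagram-caption pairs.
# The paper generates its dataset synthetically; this stands in for that data.
diagram_caption_pairs = []

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def collate(batch):
    images, captions = zip(*batch)
    return processor(text=list(captions), images=list(images),
                     return_tensors="pt", padding=True, truncation=True)

loader = DataLoader(diagram_caption_pairs, batch_size=64,
                    shuffle=True, collate_fn=collate)

model.train()
for batch in loader:
    # return_loss=True makes CLIPModel compute the symmetric
    # image-text contrastive (InfoNCE) loss over the batch.
    outputs = model(**batch, return_loss=True)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

The sketch fine-tunes the full CLIP model for simplicity; in the paper only the image encoder's diagram-related features are evaluated (probing, image retrieval, captioning), so the treatment of the text tower during training is left as an open detail here.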

@article{terashita2025_2505.19944,
  title={Can Visual Encoder Learn to See Arrows?},
  author={Naoyuki Terashita and Yusuke Tozaki and Hideaki Omote and Congkha Nguyen and Ryosuke Nakamoto and Yuta Koreeda and Hiroaki Ozaki},
  journal={arXiv preprint arXiv:2505.19944},
  year={2025}
}