Abstractive Visual Understanding of Multi-modal Structured Knowledge: A New Perspective for MLLM Evaluation

2 June 2025
Yichi Zhang
Zhuo Chen
Lingbing Guo
Yajing Xu
Min Zhang
Wen Zhang
Huajun Chen
Main: 8 pages · 7 figures · 1 table · Bibliography: 2 pages
Abstract

Multi-modal large language models (MLLMs) incorporate heterogeneous modalities into LLMs, enabling a comprehensive understanding of diverse scenarios and objects. Despite the proliferation of evaluation benchmarks and leaderboards for MLLMs, they predominantly overlook the critical capacity of MLLMs to comprehend world knowledge with structured abstractions that appear in visual form. To address this gap, we propose a novel evaluation paradigm and devise M3STR, an innovative benchmark grounded in the Multi-Modal Map for STRuctured understanding. This benchmark leverages multi-modal knowledge graphs to synthesize images encapsulating subgraph architectures enriched with multi-modal entities. M3STR necessitates that MLLMs not only recognize the multi-modal entities within the visual inputs but also decipher intricate relational topologies among them. We delineate the benchmark's statistical profiles and automated construction pipeline, accompanied by an extensive empirical analysis of 26 state-of-the-art MLLMs. Our findings reveal persistent deficiencies in processing abstractive visual information with structured knowledge, thereby charting a pivotal trajectory for advancing MLLMs' holistic reasoning capacities. Our code and data are released at this https URL
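
The abstract describes an automated construction pipeline that samples subgraphs from a multi-modal knowledge graph and renders them as images for MLLMs to interpret. The released code and data define the actual pipeline; purely as a rough illustration of the subgraph-sampling idea, below is a minimal Python sketch assuming a toy triple-store representation. The class and function names, the greedy expansion strategy, and the question template are illustrative assumptions, not the authors' method, and image rendering is omitted entirely.

# Hypothetical sketch of a subgraph-sampling step for a benchmark in the
# spirit of M3STR: pick a connected subgraph of a multi-modal knowledge
# graph, attach each entity's image, and emit a question asking an MLLM to
# recover the relational topology. All names here are illustrative
# assumptions, not the authors' released pipeline.
import random
from dataclasses import dataclass, field

@dataclass
class MultiModalKG:
    # triples: (head_entity, relation, tail_entity)
    triples: list = field(default_factory=list)
    # entity -> path of its associated image (the multi-modal part)
    entity_images: dict = field(default_factory=dict)

def sample_subgraph(kg: MultiModalKG, seed_entity: str, max_triples: int = 5):
    # Greedy expansion: keep any shuffled triple that touches the current
    # frontier, so the selected triples stay connected to the seed entity.
    selected, frontier = [], {seed_entity}
    remaining = list(kg.triples)
    random.shuffle(remaining)
    for h, r, t in remaining:
        if len(selected) >= max_triples:
            break
        if h in frontier or t in frontier:
            selected.append((h, r, t))
            frontier.update({h, t})
    return selected

def build_eval_item(kg: MultiModalKG, subgraph):
    # Pair the sampled triples with the entities' images; the gold triples
    # are the relational topology the MLLM is asked to read off the image.
    entities = {e for h, _, t in subgraph for e in (h, t)}
    return {
        "entities": sorted(entities),
        "entity_images": {e: kg.entity_images.get(e) for e in entities},
        "gold_triples": subgraph,
        "question": "List every relation shown between the depicted entities.",
    }

if __name__ == "__main__":
    kg = MultiModalKG(
        triples=[("Paris", "capital_of", "France"),
                 ("France", "member_of", "EU"),
                 ("Seine", "flows_through", "Paris")],
        entity_images={"Paris": "paris.jpg", "France": "france.jpg",
                       "EU": "eu.jpg", "Seine": "seine.jpg"},
    )
    item = build_eval_item(kg, sample_subgraph(kg, "Paris"))
    print(item["gold_triples"])

In the real benchmark the sampled subgraph would additionally be rendered as an image containing the entity pictures and relation edges, and the MLLM would be scored on how faithfully it reconstructs the gold triples from that image.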

@article{zhang2025_2506.01293,
  title={Abstractive Visual Understanding of Multi-modal Structured Knowledge: A New Perspective for MLLM Evaluation},
  author={Yichi Zhang and Zhuo Chen and Lingbing Guo and Yajing Xu and Min Zhang and Wen Zhang and Huajun Chen},
  journal={arXiv preprint arXiv:2506.01293},
  year={2025}
}