DiagramQG: Concept-Focused Diagram Question Generation via Hierarchical Knowledge Integration

26 November 2024
Xinyu Zhang
Lingling Zhang
Yanrui Wu
Muye Huang
Wenjun Wu
Bo Li
Shaowei Wang
Basura Fernando
Jun Liu
Abstract

Visual Question Generation (VQG) has gained significant attention due to its potential in educational applications. However, VQG research mainly focuses on natural images, largely neglecting diagrams in educational materials used to assess students' conceptual understanding. To address this gap, we construct DiagramQG, a dataset containing 8,372 diagrams and 19,475 questions across various subjects. DiagramQG introduces concept and target text constraints, guiding the model to generate concept-focused questions for educational purposes. In addition, we present the Hierarchical Knowledge Integration framework for Diagram Question Generation (HKI-DQG) as a strong baseline. This framework obtains multi-scale patches of diagrams and acquires knowledge using a visual language model with frozen parameters. It then integrates knowledge, text constraints, and patches to generate concept-focused questions. We evaluate the performance of existing VQG models, open-source and closed-source vision-language models, and HKI-DQG on the DiagramQG dataset. Our novel HKI-DQG consistently outperforms existing methods, demonstrating that it serves as a strong baseline. Furthermore, we apply HKI-DQG to four other VQG datasets of natural images, namely VQG-COCO, K-VQG, OK-VQA, and A-OKVQA, achieving state-of-the-art performance.
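
The abstract sketches a three-stage pipeline: cut the diagram into multi-scale patches, query a frozen vision-language model for knowledge about each patch, and then combine that knowledge with the concept and target text constraints to generate a question. The outline below is a minimal Python sketch of that flow, not the authors' implementation: the FrozenVLM interface, the grid scales, and the prompt wording are assumptions made purely for illustration.

from dataclasses import dataclass
from typing import List, Protocol, Tuple


@dataclass
class Patch:
    """A crop of the diagram: (left, top, right, bottom) in pixels, plus its grid scale."""
    box: Tuple[int, int, int, int]
    scale: int


class FrozenVLM(Protocol):
    """Hypothetical interface for a vision-language model whose parameters stay frozen."""
    def describe(self, patch: Patch, prompt: str) -> str: ...
    def generate(self, prompt: str) -> str: ...


def multi_scale_patches(width: int, height: int, scales=(1, 2, 3)) -> List[Patch]:
    """Tile the diagram into 1x1, 2x2, and 3x3 grids (the grid sizes are an assumption)."""
    patches = []
    for s in scales:
        pw, ph = width // s, height // s
        for row in range(s):
            for col in range(s):
                patches.append(
                    Patch(box=(col * pw, row * ph, (col + 1) * pw, (row + 1) * ph), scale=s)
                )
    return patches


def acquire_knowledge(vlm: FrozenVLM, patches: List[Patch], concept: str) -> List[str]:
    """Ask the frozen VLM for concept-related knowledge about each patch."""
    prompt = f"Describe what this diagram region shows about the concept '{concept}'."
    return [vlm.describe(p, prompt) for p in patches]


def generate_question(vlm: FrozenVLM, knowledge: List[str], concept: str, target: str) -> str:
    """Integrate the acquired knowledge with the concept and target text constraints."""
    context = "\n".join(f"- {k}" for k in knowledge)
    prompt = (
        f"Diagram knowledge:\n{context}\n\n"
        f"Write one question that tests the concept '{concept}' "
        f"and whose expected answer is '{target}'."
    )
    return vlm.generate(prompt)

A caller would supply a concrete FrozenVLM and the diagram's pixel dimensions; note that the paper's framework also feeds the patches themselves into the generation step, which a text-only prompt like the one above cannot capture.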

@article{zhang2025_2411.17771,
  title={DiagramQG: Concept-Focused Diagram Question Generation via Hierarchical Knowledge Integration},
  author={Xinyu Zhang and Lingling Zhang and Yanrui Wu and Muye Huang and Wenjun Wu and Bo Li and Shaowei Wang and Basura Fernando and Jun Liu},
  journal={arXiv preprint arXiv:2411.17771},
  year={2025}
}