Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports

Abstract

With recent advancements in Large Language Models (LLMs) and growing interest in retrieval-augmented generation (RAG), the ability to understand table structures has become increasingly important. This is especially critical in financial domains such as securities reports, where highly accurate question answering (QA) over tables is required. However, tables appear in a variety of formats, including HTML, images, and plain text, which makes it difficult to preserve and extract their structural information. Multimodal LLMs are therefore essential for robust and general-purpose table understanding. Despite their promise, current Large Vision-Language Models (LVLMs), a prominent class of multimodal LLMs, still struggle to accurately recognize characters and their spatial relationships within documents. In this study, we propose a method that enhances LVLM-based table understanding by incorporating in-table textual content and layout features. Experimental results demonstrate that these auxiliary modalities significantly improve performance, enabling robust interpretation of complex document layouts without relying on explicitly structured input formats.
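
The abstract does not specify how the auxiliary modalities are encoded, so the sketch below illustrates one plausible setup under assumed conventions: OCR-extracted cell text and normalized bounding boxes are serialized into a textual layout prompt that accompanies the table image passed to an LVLM. All names here (OcrToken, build_layout_prompt, the 0-1000 coordinate normalization) are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not the authors' implementation) of serializing in-table
# text and layout features as an auxiliary modality for an LVLM.
from dataclasses import dataclass
from typing import List


@dataclass
class OcrToken:
    text: str    # recognized cell text (e.g., from an OCR engine)
    x0: float    # bounding box in absolute pixel coordinates
    y0: float
    x1: float
    y1: float


def build_layout_prompt(tokens: List[OcrToken],
                        page_width: int,
                        page_height: int,
                        question: str) -> str:
    """Serialize OCR text plus normalized bounding boxes into a text prompt
    that can be fed to an LVLM together with the table image."""
    lines = ["The table image is accompanied by its OCR text and layout.",
             "Each line is: <text> @ (x0, y0, x1, y1), normalized to 0-1000."]
    for t in tokens:
        box = (int(1000 * t.x0 / page_width), int(1000 * t.y0 / page_height),
               int(1000 * t.x1 / page_width), int(1000 * t.y1 / page_height))
        lines.append(f"{t.text} @ {box}")
    lines.append(f"Question: {question}")
    lines.append("Answer using only the table contents.")
    return "\n".join(lines)


if __name__ == "__main__":
    # Toy example: two cells from a securities-report table (hypothetical values).
    tokens = [OcrToken("営業収益", 120, 40, 260, 70),
              OcrToken("1,234百万円", 540, 40, 700, 70)]
    prompt = build_layout_prompt(tokens, page_width=1200, page_height=1600,
                                 question="営業収益はいくらですか？")
    print(prompt)  # pass `prompt` together with the table image to an LVLM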

@article{aida2025_2505.17625,
  title={Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports},
  author={Hayato Aida and Kosuke Takahashi and Takahiro Omi},
  journal={arXiv preprint arXiv:2505.17625},
  year={2025}
}
Comments: 5 pages (main) + 1 page bibliography, 7 figures, 3 tables