SCAN: Semantic Document Layout Analysis for Textual and Visual Retrieval-Augmented Generation

With the increasing adoption of Large Language Models (LLMs) and Vision-Language Models (VLMs), rich document analysis technologies for applications like Retrieval-Augmented Generation (RAG) and visual RAG are gaining significant attention. Recent research indicates that using VLMs can achieve better RAG performance, but processing rich documents remains a challenge because a single page can contain a large amount of heterogeneous information. In this paper, we present SCAN (SemantiC document layout ANalysis), a novel approach that enhances both textual and visual RAG systems working with visually rich documents. It is a VLM-friendly approach that identifies document components at an appropriate semantic granularity, balancing context preservation with processing efficiency. SCAN uses a coarse-grained semantic approach that divides documents into coherent regions covering continuous components. We trained the SCAN model by fine-tuning object detection models on carefully annotated datasets. Our experimental results across English and Japanese datasets demonstrate that applying SCAN improves end-to-end textual RAG performance by up to 9.0% and visual RAG performance by up to 6.4%, outperforming conventional approaches and even commercial document processing solutions.
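The coarse-grained grouping described above can be illustrated with a minimal sketch. Note the paper fine-tunes an object detector to emit semantic regions directly; the heuristic below, and all names in it, are illustrative assumptions, not the authors' method. It merely shows the idea of merging fine-grained component boxes into coherent regions for downstream RAG indexing.

```python
# Illustrative sketch (NOT the paper's method): merge fine-grained component
# boxes (x0, y0, x1, y1) that sit close together vertically into one coarse
# semantic region, so each region can be cropped or extracted as a RAG chunk.

def merge_into_regions(boxes, gap=10):
    """Group component boxes into coarse regions: a box whose top edge is
    within `gap` pixels of the previous region's bottom edge is merged in."""
    regions = []
    for x0, y0, x1, y1 in sorted(boxes, key=lambda b: b[1]):
        if regions and y0 - regions[-1][3] <= gap:
            rx0, ry0, rx1, ry1 = regions[-1]
            regions[-1] = (min(rx0, x0), min(ry0, y0),
                           max(rx1, x1), max(ry1, y1))
        else:
            regions.append((x0, y0, x1, y1))
    return regions

# A title box, a paragraph directly below it, and a distant figure caption:
components = [(50, 40, 500, 70), (50, 75, 500, 200), (50, 400, 500, 430)]
print(merge_into_regions(components))
# → [(50, 40, 500, 200), (50, 400, 500, 430)]
```

In practice the learned detector replaces this distance heuristic, which is exactly what lets SCAN keep semantically continuous components (e.g. a heading and its body text) in one retrieval unit.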
@article{dong2025_2505.14381,
  title={SCAN: Semantic Document Layout Analysis for Textual and Visual Retrieval-Augmented Generation},
  author={Yuyang Dong and Nobuhiro Ueda and Krisztián Boros and Daiki Ito and Takuya Sera and Masafumi Oyamada},
  journal={arXiv preprint arXiv:2505.14381},
  year={2025}
}