Despite the growing adoption of electronic health records, many processes still rely on paper documents, reflecting the heterogeneous real-world conditions in which healthcare is delivered. Manual transcription of paper-based data into digital formats is time-consuming and prone to errors. To streamline this workflow, this study presents an open-source pipeline that extracts and categorizes checkbox data from scanned documents. Demonstrated on transfusion reaction reports, the design supports adaptation to other checkbox-rich document types. The proposed method integrates checkbox detection, multilingual optical character recognition (OCR), and multilingual vision-language models (VLMs). The pipeline achieves high precision and recall when evaluated against annually compiled gold standards from 2017 to 2024, reducing administrative workload while supporting accurate regulatory reporting. The open-source availability of this pipeline encourages self-hosted parsing of checkbox forms.
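The abstract names three stages: detecting checkbox regions, deciding whether each box is ticked, and associating boxes with their OCR-extracted labels. A minimal sketch of that flow is shown below; the data structures, the fill-ratio heuristic, the threshold value, and the label names are illustrative assumptions, not the paper's actual implementation (which additionally uses multilingual OCR and VLMs).

```python
# Hypothetical sketch of a checkbox-extraction flow: pair each detected
# box with its OCR'd label and classify checked/unchecked by how much of
# the box interior is inked. All names and values here are assumptions.
from dataclasses import dataclass


@dataclass
class Checkbox:
    label: str          # text next to the box (from OCR in a real pipeline)
    fill_ratio: float   # fraction of dark pixels inside the box region


def is_checked(box: Checkbox, threshold: float = 0.15) -> bool:
    """Treat a box as ticked when enough of its interior is inked."""
    return box.fill_ratio >= threshold


def extract(boxes: list[Checkbox]) -> dict[str, bool]:
    """Map each detected label to its checked/unchecked state."""
    return {b.label: is_checked(b) for b in boxes}


# Example: two detected boxes from a scanned form.
result = extract([
    Checkbox("fever", 0.42),      # heavily inked -> checked
    Checkbox("urticaria", 0.03),  # nearly empty  -> unchecked
])
print(result)  # {'fever': True, 'urticaria': False}
```

In practice the fill-ratio threshold would be tuned on annotated scans, and ambiguous cases could be deferred to a VLM, in line with the multimodal design the abstract describes.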
@article{schäfer2025_2504.20220,
  title   = {A Multimodal Pipeline for Clinical Data Extraction: Applying Vision-Language Models to Scans of Transfusion Reaction Reports},
  author  = {Henning Schäfer and Cynthia S. Schmidt and Johannes Wutzkowsky and Kamil Lorek and Lea Reinartz and Johannes Rückert and Christian Temme and Britta Böckmann and Peter A. Horn and Christoph M. Friedrich},
  journal = {arXiv preprint arXiv:2504.20220},
  year    = {2025}
}