
Robustness Evaluation of OCR-based Visual Document Understanding under Multi-Modal Adversarial Attacks

6 pages (main) + 2 pages bibliography, 2 figures, 8 tables
Abstract

Visual Document Understanding (VDU) systems have achieved strong performance in information extraction by integrating textual, layout, and visual signals. However, their robustness under realistic adversarial perturbations remains insufficiently explored. We introduce the first unified framework for generating and evaluating multi-modal adversarial attacks on OCR-based VDU models. Our method covers six gradient-based layout attack scenarios, incorporating manipulations of OCR bounding boxes, pixels, and text at both word and line granularities, with constraints on the layout perturbation budget (e.g., IoU >= 0.6) to preserve plausibility. Experimental results across four datasets (FUNSD, CORD, SROIE, DocVQA) and six model families demonstrate that line-level attacks and compound perturbations (BBox + Pixel + Text) yield the most severe performance degradation. Projected Gradient Descent (PGD)-based BBox perturbations outperform random-shift baselines across all investigated models. Ablation studies further validate the impact of the layout budget, text modification, and adversarial transferability.
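The IoU-constrained bounding-box perturbation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the signed-gradient step and the bisection-based projection are assumptions about how a PGD-style layout attack might enforce the IoU >= 0.6 plausibility budget.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def pgd_bbox_step(box, grad, alpha=2.0, iou_min=0.6):
    """One signed-gradient (PGD-style) step on box coordinates,
    projected back onto the plausibility set by shrinking the shift
    until IoU with the original box is at least iou_min.
    `grad` is a hypothetical loss gradient w.r.t. the 4 coordinates."""
    step = [alpha * ((g > 0) - (g < 0)) for g in grad]  # sign of gradient
    scale = 1.0
    for _ in range(20):  # bisect on the step scale
        cand = [c + scale * s for c, s in zip(box, step)]
        if iou(box, cand) >= iou_min:
            return cand
        scale *= 0.5
    return list(box)  # budget too tight: keep the unperturbed box
```

For example, a unit-sign gradient on a 10x10 box yields a shifted/enlarged box whose IoU with the original is still above the 0.6 budget.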

@article{tien2025_2506.16407,
  title={Robustness Evaluation of OCR-based Visual Document Understanding under Multi-Modal Adversarial Attacks},
  author={Dong Nguyen Tien and Dung D. Le},
  journal={arXiv preprint arXiv:2506.16407},
  year={2025}
}