DocDjinn: Controllable Synthetic Document Generation with VLMs and Handwriting Diffusion

Marcel Lamott
Saifullah Saifullah
Nauman Riaz
Yves-Noel Weweler
Tobias Alt-Veit
Ahmad Sarmad Ali
Muhammad Armaghan Shakir
Adrian Kalwa
Momina Moetesum
Andreas Dengel
Sheraz Ahmed
Faisal Shafait
Ulrich Schwanecke
Adrian Ulges
Main: 13 pages
37 figures
21 tables
Appendix: 46 pages
Abstract

Effective document intelligence models rely on large amounts of annotated training data. However, procuring sufficient and high-quality data poses significant challenges due to the labor-intensive and costly nature of data acquisition. Additionally, leveraging language models to annotate real documents raises concerns about data privacy. Synthetic document generation has emerged as a promising, privacy-preserving alternative. We propose DocDjinn, a novel framework for controllable synthetic document generation using Vision-Language Models (VLMs) that produces annotated documents from unlabeled seed samples. Our approach generates visually plausible and semantically consistent synthetic documents that follow the distribution of an existing source dataset through clustering-based seed selection with parametrized sampling. By enriching documents with realistic diffusion-based handwriting and contextual visual elements via semantic-visual decoupling, we generate diverse, high-quality annotated synthetic documents. We evaluate across eleven benchmarks spanning key information extraction, question answering, document classification, and document layout analysis. To our knowledge, this is the first work demonstrating that VLMs can generate faithful annotated document datasets at scale from unlabeled seeds that can effectively enrich or approximate real, manually annotated data for diverse document understanding tasks. We show that with only 100 real training samples, our framework achieves on average 87% of the performance of the full real-world dataset. We publicly release our code and 140k+ synthetic document samples.
