$I^2G$: Generating Instructional Illustrations via Text-Conditioned Diffusion

The effective communication of procedural knowledge remains a significant challenge in natural language processing (NLP), as purely textual instructions often fail to convey complex physical actions and spatial relationships. We address this limitation by proposing a language-driven framework that translates procedural text into coherent visual instructions. Our approach models the linguistic structure of instructional content by decomposing it into goal statements and sequential steps, then conditioning visual generation on these linguistic elements. We introduce three key innovations: (1) a constituency parser-based text encoding mechanism that preserves semantic completeness even with lengthy instructions, (2) a pairwise discourse coherence model that maintains consistency across instruction sequences, and (3) a novel evaluation protocol specifically designed for procedural language-to-image alignment. Our experiments across three instructional datasets (HTStep, CaptainCook4D, and WikiAll) demonstrate that our method significantly outperforms existing baselines in generating visuals that accurately reflect the linguistic content and sequential nature of instructions. This work contributes to the growing body of research on grounding procedural language in visual content, with applications spanning education, task guidance, and multimodal language understanding.
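To make the first contribution concrete, the sketch below illustrates one way to encode a lengthy instruction step for diffusion conditioning without truncating it at the text encoder's limit. This is a minimal illustration under stated assumptions, not the authors' implementation: the CLIP model name, the punctuation-based splitter (standing in for the paper's constituency parser), and the mean-pooling fusion of chunk embeddings are all illustrative choices.

```python
# Minimal sketch (assumptions noted): decompose an instruction into a goal and
# steps, split long steps into chunks that fit the text encoder's token limit,
# and pool chunk embeddings into per-step conditioning vectors for a diffusion
# model. A simple punctuation-based splitter stands in for the paper's
# constituency parser.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

MODEL = "openai/clip-vit-base-patch32"  # assumed encoder; its limit is 77 tokens
tokenizer = CLIPTokenizer.from_pretrained(MODEL)
encoder = CLIPTextModel.from_pretrained(MODEL).eval()

def split_step(step: str) -> list[str]:
    """Placeholder for constituency-based splitting: greedily group clause-like
    spans so that no chunk exceeds the encoder's token limit."""
    chunks, current = [], []
    for clause in step.replace(";", ",").split(","):
        candidate = ", ".join(current + [clause.strip()])
        too_long = len(tokenizer(candidate)["input_ids"]) >= tokenizer.model_max_length
        if current and too_long:
            chunks.append(", ".join(current))
            current = [clause.strip()]
        else:
            current.append(clause.strip())
    if current:
        chunks.append(", ".join(current))
    return chunks

@torch.no_grad()
def encode_instruction(goal: str, steps: list[str]) -> torch.Tensor:
    """Encode the goal and each step; mean-pool chunk embeddings per step and
    stack them into a (1 + num_steps, dim) conditioning tensor."""
    embeddings = []
    for text in [goal] + steps:
        chunks = split_step(text)
        tokens = tokenizer(chunks, padding=True, truncation=True, return_tensors="pt")
        pooled = encoder(**tokens).pooler_output        # (num_chunks, dim)
        embeddings.append(pooled.mean(dim=0))           # fuse chunks into one vector
    return torch.stack(embeddings)

cond = encode_instruction(
    goal="Assemble the bookshelf",
    steps=["Attach the side panels to the base using the long screws, "
           "keeping the pre-drilled holes aligned with the cam locks"],
)
print(cond.shape)  # torch.Size([2, 512]) for this CLIP variant
```

In practice, the resulting per-step vectors would be passed as cross-attention conditioning to the diffusion backbone; the pairwise discourse coherence model described in the abstract would then enforce consistency across the step sequence.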
@article{bi2025_2505.16425,
  title   = {$I^2G$: Generating Instructional Illustrations via Text-Conditioned Diffusion},
  author  = {Jing Bi and Pinxin Liu and Ali Vosoughi and Jiarui Wu and Jinxi He and Chenliang Xu},
  journal = {arXiv preprint arXiv:2505.16425},
  year    = {2025}
}