LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation

Jiaxin Cheng
Xiao Liang
Xingjian Shi
Tong He
Tianjun Xiao
Mu Li
Abstract

Layout-to-image generation refers to the task of synthesizing photo-realistic images based on semantic layouts. In this paper, we propose LayoutDiffuse, which adapts a foundational diffusion model pretrained on large-scale image or text-image datasets for layout-to-image generation. By adopting a novel neural adaptor based on layout attention and task-aware prompts, our method trains efficiently, generates images with both high perceptual quality and layout alignment, and requires less data. Experiments on three datasets show that our method significantly outperforms 10 other generative models based on GANs, VQ-VAE, and diffusion models.
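
The abstract does not spell out the adaptor's mechanics. As a rough, illustrative sketch of what a layout-attention adaptor for a pretrained diffusion U-Net could look like, here is a minimal PyTorch example; the LayoutAttention class, its per-class box embeddings, and the zero-initialized residual projection are all assumptions for illustration, not the paper's actual implementation (task-aware prompts are likewise omitted here).

    # Illustrative sketch only, NOT the paper's released code. All names and
    # design choices here are assumptions. The idea: inject bounding-box layout
    # information into a frozen diffusion U-Net's feature maps via a small,
    # residual attention module.
    import torch
    import torch.nn as nn

    class LayoutAttention(nn.Module):
        def __init__(self, channels: int, num_classes: int, num_heads: int = 4):
            super().__init__()
            # One learnable embedding per object category, added inside its box.
            self.class_embed = nn.Embedding(num_classes, channels)
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            # Zero-initialized projection so the adaptor is a no-op at the start
            # of fine-tuning, preserving the pretrained model's behavior.
            self.out_proj = nn.Linear(channels, channels)
            nn.init.zeros_(self.out_proj.weight)
            nn.init.zeros_(self.out_proj.bias)

        def forward(self, feat, boxes, labels):
            # feat:   (B, C, H, W) U-Net feature map
            # boxes:  (B, N, 4) normalized [x0, y0, x1, y1]
            # labels: (B, N) integer category ids
            B, C, H, W = feat.shape
            tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
            # Layout bias: broadcast each class embedding over its box region.
            # (Loops kept for clarity; a real implementation would vectorize.)
            bias = torch.zeros_like(tokens)
            for b in range(B):
                for n in range(boxes.shape[1]):
                    x0, y0, x1, y1 = boxes[b, n]
                    xs, xe = int(x0 * W), max(int(x1 * W), int(x0 * W) + 1)
                    ys, ye = int(y0 * H), max(int(y1 * H), int(y0 * H) + 1)
                    mask = torch.zeros(H, W, dtype=torch.bool, device=feat.device)
                    mask[ys:ye, xs:xe] = True
                    bias[b, mask.flatten()] += self.class_embed(labels[b, n])
            q = tokens + bias
            out, _ = self.attn(q, q, q)  # self-attention over layout-aware tokens
            out = self.out_proj(out)
            # Residual update: at init out_proj outputs zeros, so feat is unchanged.
            return feat + out.transpose(1, 2).view(B, C, H, W)

The zero-initialized output projection makes the module an identity at initialization, so fine-tuning starts from the pretrained model's exact behavior; this kind of residual, zero-init adaptation is one common way to make fine-tuning data-efficient, consistent with the abstract's claims, though the paper's actual design may differ.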
