Qwen-Image-Layered: Towards Inherent Editability via Layer Decomposition

Shengming Yin
Zekai Zhang
Zecheng Tang
Kaiyuan Gao
Xiao Xu
Kun Yan
Jiahao Li
Yilei Chen
Yuxiang Chen
Heung-Yeung Shum
Lionel M. Ni
Jingren Zhou
Junyang Lin
Chenfei Wu
Abstract

Recent visual generative models often struggle with consistency during image editing due to the entangled nature of raster images, where all visual content is fused into a single canvas. In contrast, professional design tools employ layered representations, allowing isolated edits while preserving consistency. Motivated by this, we propose Qwen-Image-Layered, an end-to-end diffusion model that decomposes a single RGB image into multiple semantically disentangled RGBA layers, enabling inherent editability, where each RGBA layer can be independently manipulated without affecting other content. To support variable-length decomposition, we introduce three key components: (1) an RGBA-VAE to unify the latent representations of RGB and RGBA images; (2) a VLD-MMDiT (Variable Layers Decomposition MMDiT) architecture capable of decomposing a variable number of image layers; and (3) a Multi-stage Training strategy to adapt a pretrained image generation model into a multilayer image decomposer. Furthermore, to address the scarcity of high-quality multilayer training images, we build a pipeline to extract and annotate multilayer images from Photoshop documents (PSD). Experiments demonstrate that our method significantly surpasses existing approaches in decomposition quality and establishes a new paradigm for consistent image editing. Our code and models are released at this https URL.
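To make the data-construction step concrete, below is a minimal sketch of how canvas-aligned RGBA layers and the flattened RGB composite could be pulled out of a PSD file. It assumes the open-source psd-tools Python library and hypothetical file names; the paper's actual extraction and annotation pipeline is not described in the abstract, so this is illustrative rather than the authors' implementation.

# Illustrative sketch only; not the paper's pipeline.
# Assumes psd-tools (pip install psd-tools) and Pillow.
from pathlib import Path
from psd_tools import PSDImage

def extract_layers(psd_path: str, out_dir: str) -> None:
    """Export each visible pixel layer as a canvas-aligned RGBA PNG,
    plus the flattened RGB composite a decomposition model would take as input."""
    psd = PSDImage.open(psd_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Flattened composite: the single entangled RGB canvas.
    psd.composite().convert("RGB").save(out / "composite.png")

    # Walk all descendants so layers nested inside groups are included.
    for idx, layer in enumerate(psd.descendants()):
        if layer.is_group() or not layer.visible:
            continue
        # Render the layer onto the full canvas so every layer shares one coordinate frame.
        layer_img = layer.composite(viewport=psd.viewbox)
        if layer_img is None:
            continue
        safe_name = "".join(c if c.isalnum() else "_" for c in layer.name)
        layer_img.convert("RGBA").save(out / f"layer_{idx:03d}_{safe_name}.png")

if __name__ == "__main__":
    extract_layers("design.psd", "layers_out")  # hypothetical paths

A real pipeline would additionally filter degenerate layers (empty, fully occluded, or adjustment layers) and attach annotations, but those steps are not specified in the abstract and are omitted here.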
