Diffusion-based G-buffer generation and rendering

Despite recent advances in text-to-image generation, controlling geometric layout and material properties in synthesized scenes remains challenging. We present a novel pipeline that first produces a G-buffer (albedo, normals, depth, roughness, and metallic) from a text prompt and then renders a final image through a modular neural network. This intermediate representation enables fine-grained editing: users can copy and paste within specific G-buffer channels to insert or reposition objects, or apply masks to the irradiance channel to adjust lighting locally. As a result, real objects can be seamlessly integrated into virtual scenes, and virtual objects can be placed into real environments with high fidelity. By separating scene decomposition from image rendering, our method offers a practical balance between detailed post-generation control and efficient text-driven synthesis. We demonstrate its effectiveness on a variety of examples, showing that G-buffer editing significantly extends the flexibility of text-guided image generation.
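As a rough illustration of the kind of channel-level editing the abstract describes, the sketch below manipulates a G-buffer stored as a dictionary of NumPy arrays: one helper copies and pastes a region within selected channels to reposition an object, and another scales irradiance under a mask to adjust lighting locally. The channel names follow the abstract, but the data layout, the `paste_region` and `relight_masked` helpers, and the presence of an explicit irradiance map in the buffer are assumptions for illustration, not the paper's actual API.

```python
import numpy as np

# Hypothetical G-buffer: a dict of per-pixel channel maps (H x W [x C]).
# Shapes, defaults, and the explicit "irradiance" channel are assumptions.
def make_gbuffer(h, w):
    return {
        "albedo":     np.zeros((h, w, 3), dtype=np.float32),
        "normals":    np.zeros((h, w, 3), dtype=np.float32),
        "depth":      np.ones((h, w), dtype=np.float32),
        "roughness":  np.full((h, w), 0.5, dtype=np.float32),
        "metallic":   np.zeros((h, w), dtype=np.float32),
        "irradiance": np.ones((h, w, 3), dtype=np.float32),
    }

def paste_region(gbuffer, src_box, dst_xy,
                 channels=("albedo", "normals", "depth", "roughness", "metallic")):
    """Copy a rectangular region within selected channels to a new location,
    e.g. to duplicate or reposition an object before neural rendering."""
    y0, x0, y1, x1 = src_box
    dy, dx = dst_xy
    h, w = y1 - y0, x1 - x0
    for name in channels:
        patch = gbuffer[name][y0:y1, x0:x1].copy()
        gbuffer[name][dy:dy + h, dx:dx + w] = patch
    return gbuffer

def relight_masked(gbuffer, mask, gain):
    """Scale irradiance inside a binary (H x W) mask to adjust lighting locally."""
    irr = gbuffer["irradiance"]
    gbuffer["irradiance"] = np.where(mask[..., None] > 0, irr * gain, irr)
    return gbuffer

# Example: move a 64x64 patch and brighten a masked area before rendering.
gb = make_gbuffer(512, 512)
gb = paste_region(gb, src_box=(100, 100, 164, 164), dst_xy=(300, 300))
mask = np.zeros((512, 512), dtype=np.float32)
mask[200:260, 200:260] = 1.0
gb = relight_masked(gb, mask, gain=1.5)
```

In the pipeline described above, the edited buffer would then be passed to the neural rendering stage to produce the final image.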
@article{xue2025_2503.15147,
  title   = {Diffusion-based G-buffer generation and rendering},
  author  = {Bowen Xue and Giuseppe Claudio Guarnera and Shuang Zhao and Zahra Montazeri},
  journal = {arXiv preprint arXiv:2503.15147},
  year    = {2025}
}