ResearchTrend.AI

IntrinsicEdit: Precise generative image manipulation in intrinsic space

13 May 2025
Linjie Lyu
Valentin Deschaintre
Yannick Hold-Geoffroy
Miloš Hašan
Jae Shin Yoon
Thomas Leimkühler
Christian Theobalt
Iliyan Georgiev
Abstract

Generative diffusion models have advanced image editing with high-quality results and intuitive interfaces such as prompts and semantic drawing. However, these interfaces lack precise control, and the associated methods typically specialize in a single editing task. We introduce a versatile, generative workflow that operates in an intrinsic-image latent space, enabling semantic, local manipulation with pixel precision for a range of editing operations. Building atop the RGB-X diffusion framework, we address key challenges of identity preservation and intrinsic-channel entanglement. By incorporating exact diffusion inversion and disentangled channel manipulation, we enable precise, efficient editing with automatic resolution of global illumination effects -- all without additional data collection or model fine-tuning. We demonstrate state-of-the-art performance across a variety of tasks on complex images, including color and texture adjustments, object insertion and removal, global relighting, and their combinations.
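To build intuition for why editing in an intrinsic space enables pixel-precise, illumination-aware manipulation, here is a toy numpy sketch (not the paper's diffusion-based method) using the classic decomposition image = albedo × irradiance: recoloring the albedo channel in a masked region changes the material while the original shading is preserved on recomposition. All array names and values here are illustrative assumptions.

```python
import numpy as np

# Toy intrinsic decomposition: I = albedo * irradiance.
# Editing albedo changes material color; shading is untouched.
rng = np.random.default_rng(0)
H, W = 4, 4
albedo = rng.uniform(0.2, 0.9, size=(H, W, 3))      # per-pixel base color
irradiance = rng.uniform(0.5, 1.0, size=(H, W, 1))  # per-pixel shading

image = albedo * irradiance  # forward composition

# Pixel-precise local edit in intrinsic space: recolor a 2x2 region.
mask = np.zeros((H, W, 1), dtype=bool)
mask[:2, :2] = True
new_color = np.array([0.9, 0.1, 0.1])               # target albedo (red)
edited_albedo = np.where(mask, new_color, albedo)

edited_image = edited_albedo * irradiance  # recompose with original shading

# Shading is identical everywhere, including inside the edited region.
assert np.allclose(edited_image / edited_albedo, image / albedo)
# Pixels outside the mask are bit-for-bit unchanged.
assert np.allclose(edited_image[2:], image[2:])
```

In the paper's setting the decomposition, editing, and recomposition all happen in the latent space of an RGB-X diffusion model rather than by direct multiplication, which is what allows global illumination effects (e.g., color bleeding from the recolored object) to be resolved automatically.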

@article{lyu2025_2505.08889,
  title={IntrinsicEdit: Precise generative image manipulation in intrinsic space},
  author={Linjie Lyu and Valentin Deschaintre and Yannick Hold-Geoffroy and Miloš Hašan and Jae Shin Yoon and Thomas Leimkühler and Christian Theobalt and Iliyan Georgiev},
  journal={arXiv preprint arXiv:2505.08889},
  year={2025}
}