
Don't Forget your Inverse DDIM for Image Editing

Abstract

The field of text-to-image generation has undergone significant advancements with the introduction of diffusion models. Nevertheless, the challenge of editing real images persists, as most methods are either computationally intensive or produce poor reconstructions. This paper introduces SAGE (Self-Attention Guidance for image Editing), a novel technique leveraging pre-trained diffusion models for image editing. SAGE builds upon the DDIM algorithm and incorporates a novel guidance mechanism that utilizes the self-attention layers of the diffusion U-Net. This mechanism computes a reconstruction objective based on attention maps generated during the inverse DDIM process, enabling efficient preservation of unedited regions without the need to precisely reconstruct the entire input image. SAGE thus directly addresses the key challenges in image editing. The superiority of SAGE over other methods is demonstrated through quantitative and qualitative evaluations and confirmed by a statistically validated, comprehensive user study, in which all 47 surveyed users preferred SAGE over competing methods. Additionally, SAGE ranks as the top-performing method in seven out of ten quantitative analyses and secures second and third places in the remaining three.
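The core idea, attention maps recorded during inverse DDIM serving as a reconstruction objective that guides the editing pass, can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the paper's implementation: `self_attention_maps` mimics a U-Net self-attention layer on a small latent, and the finite-difference gradient step plays the role of the guidance update.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention_maps(latent):
    # Toy stand-in for the U-Net's self-attention maps; in the real method
    # these come from the diffusion U-Net at each denoising step.
    logits = latent @ latent.T
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_loss(ref_map, cur_map):
    # Reconstruction objective: penalize deviation from the attention maps
    # recorded during inverse DDIM, encouraging unedited regions to persist.
    return float(np.mean((ref_map - cur_map) ** 2))

# 1) Record reference attention maps during (toy) DDIM inversion.
latent_inv = rng.standard_normal((8, 4))
ref_map = self_attention_maps(latent_inv)

# 2) During editing, the latent drifts; guide it back toward the reference maps.
latent = latent_inv + 0.1 * rng.standard_normal((8, 4))
base = attention_loss(ref_map, self_attention_maps(latent))

# Finite-difference gradient of the objective (a sketch; a real system
# would backpropagate through the U-Net's attention layers instead).
eps = 1e-4
grad = np.zeros_like(latent)
for idx in np.ndindex(latent.shape):
    pert = latent.copy()
    pert[idx] += eps
    grad[idx] = (attention_loss(ref_map, self_attention_maps(pert)) - base) / eps

latent_guided = latent - 1.0 * grad  # one guidance step
guided = attention_loss(ref_map, self_attention_maps(latent_guided))
assert guided < base  # the guided latent reproduces the reference attention better
```

The key design point this sketch mirrors is that the objective compares attention maps rather than pixels, so the edit prompt remains free to change content while the attention structure of unedited regions is pulled back toward the inversion trajectory.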

@article{gomez-trenado2025_2505.09571,
  title={Don't Forget your Inverse DDIM for Image Editing},
  author={Guillermo Gomez-Trenado and Pablo Mesejo and Oscar Cordón and Stéphane Lathuilière},
  journal={arXiv preprint arXiv:2505.09571},
  year={2025}
}