Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects

Abstract

Diffusion models have achieved unprecedented fidelity and diversity in synthesizing images, videos, 3D assets, and more. However, subject mixing remains an unresolved issue for diffusion-based image synthesis, particularly when generating multiple similar-looking subjects. We propose Self-Cross Diffusion Guidance, which penalizes the overlap between the cross-attention maps and the aggregated self-attention map. Compared to previous methods based on self-attention or cross-attention alone, our guidance is more effective at eliminating subject mixing. Moreover, our guidance addresses subject mixing for all relevant patches, not only the most discriminative one (e.g., the beak of a bird). For each subject, we aggregate the self-attention maps of patches with high cross-attention values, so the aggregated self-attention map forms a region to which the whole subject attends. Our training-free method boosts the performance of both UNet-based and Transformer-based diffusion models, such as the Stable Diffusion series. We also release the Similar Subjects Dataset (SSD), a challenging benchmark, and use GPT-4o for automatic and reliable evaluation. Extensive qualitative and quantitative results demonstrate the effectiveness of our self-cross diffusion guidance.
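
The mechanism described in the abstract can be illustrated with a short PyTorch-style sketch. This is a minimal illustration under simplifying assumptions: attention maps are taken as dense tensors of fixed shape, patches are selected with a hypothetical `top_k` threshold, and the overlap is measured with an elementwise minimum; the exact selection rule, normalization, and overlap penalty used in the paper may differ.

```python
import torch

def self_cross_guidance_loss(cross_attn, self_attn, top_k=16):
    """Sketch of a self-cross overlap penalty (illustrative only).

    cross_attn: (num_subjects, N) cross-attention over N image patches,
                one map per subject token (assumed pre-normalized).
    self_attn:  (N, N) self-attention among image patches.
    """
    num_subjects, n_patches = cross_attn.shape

    # For each subject, aggregate the self-attention rows of the patches
    # with the highest cross-attention values, weighted by those values.
    aggregated = []
    for i in range(num_subjects):
        weights, idx = cross_attn[i].topk(top_k)               # (top_k,)
        rows = self_attn[idx]                                   # (top_k, N)
        agg = (weights.unsqueeze(1) * rows).sum(0) / weights.sum()
        aggregated.append(agg)                                  # (N,)
    aggregated = torch.stack(aggregated)                        # (num_subjects, N)

    # Penalize overlap between subject i's aggregated self-attention region
    # and subject j's cross-attention map for i != j.
    loss = cross_attn.new_zeros(())
    for i in range(num_subjects):
        for j in range(num_subjects):
            if i != j:
                loss = loss + torch.minimum(aggregated[i], cross_attn[j]).sum()
    return loss
```

In a guidance loop, the gradient of such a loss with respect to the noisy latent would be used to steer sampling at each denoising step, in the spirit of classifier-style guidance; since the method is training-free, no model weights are updated.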

@article{qiu2025_2411.18936,
  title={Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects},
  author={Weimin Qiu and Jieke Wang and Meng Tang},
  journal={arXiv preprint arXiv:2411.18936},
  year={2025}
}