Controllable Coupled Image Generation via Diffusion Models

7 June 2025
Chenfei Yuan
Nanshan Jia
Hangqi Li
Peter W. Glynn
Zeyu Zheng
Main: 9 pages · 11 figures · 4 tables · Bibliography: 5 pages · Appendix: 8 pages
Abstract

We present an attention-level control method for the task of coupled image generation, where "coupled" means that multiple simultaneously generated images are expected to share the same or very similar backgrounds. While the backgrounds are coupled, the central objects in the generated images retain the flexibility afforded by their different text prompts. The proposed method disentangles the background and entity components in the model's cross-attention modules and attaches a sequence of time-varying weight control parameters that depend on the sampling time step. We optimize this sequence of weights with a combined objective that measures how closely the backgrounds are coupled, together with text-to-image alignment and overall visual quality. Empirical results demonstrate that our method outperforms existing approaches across these criteria.
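The abstract describes blending background and entity components inside cross-attention with a time-step-dependent weight. The paper's exact mechanism is not given here, so the following is a minimal, hypothetical sketch: text tokens are split by a background mask, attention is computed separately over each group, and the two outputs are mixed with a weight `w_t` that varies over the sampling schedule. All names (`coupled_attention`, `bg_mask`, `w_t`) are illustrative, not the authors' API.

```python
import numpy as np

def cross_attention(q, k, v):
    """Standard scaled dot-product cross-attention (single head)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def coupled_attention(q, k, v, bg_mask, w_t):
    """Hypothetical coupled attention: disentangle background and
    entity text tokens, then blend the two attention outputs with a
    time-varying weight w_t in [0, 1]."""
    bg_out = cross_attention(q, k[bg_mask], v[bg_mask])     # background tokens
    ent_out = cross_attention(q, k[~bg_mask], v[~bg_mask])  # entity tokens
    return w_t * bg_out + (1.0 - w_t) * ent_out

# Toy example: 4 query (image) positions, 6 text tokens, first 3 = background.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(6, 8))
v = rng.normal(size=(6, 8))
bg_mask = np.array([True, True, True, False, False, False])

# An illustrative schedule: emphasize the shared background early in
# sampling, then hand control over to the per-image entity prompt.
for t, w_t in enumerate([0.9, 0.5, 0.1]):
    out = coupled_attention(q, k, v, bg_mask, w_t)
    print(f"step {t}: w_t={w_t}, output shape {out.shape}")
```

In the paper, the sequence of weights is optimized against the combined objective rather than fixed by hand; the loop above only illustrates how a per-step weight would enter the attention computation.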

@article{yuan2025_2506.06826,
  title={Controllable Coupled Image Generation via Diffusion Models},
  author={Chenfei Yuan and Nanshan Jia and Hangqi Li and Peter W. Glynn and Zeyu Zheng},
  journal={arXiv preprint arXiv:2506.06826},
  year={2025}
}