ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Create Anything Anywhere: Layout-Controllable Personalized Diffusion Model for Multiple Subjects

27 May 2025
Wei Li
Hebei Li
Yansong Peng
Siying Wu
Yueyi Zhang
Xiaoyan Sun
Abstract

Diffusion models have significantly advanced text-to-image generation, laying the foundation for personalized generative frameworks. However, existing methods lack precise layout controllability and overlook the potential of dynamic features of reference subjects for improving fidelity. In this work, we propose the Layout-Controllable Personalized Diffusion (LCP-Diffusion) model, a novel framework that integrates subject identity preservation with flexible layout guidance in a tuning-free approach. Our model employs a Dynamic-Static Complementary Visual Refining module to comprehensively capture the intricate details of reference subjects, and introduces a Dual Layout Control mechanism to enforce robust spatial control across both the training and inference stages. Extensive experiments validate that LCP-Diffusion excels in both identity preservation and layout controllability. To the best of our knowledge, this is a pioneering work enabling users to "create anything anywhere".
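The abstract describes enforcing spatial control from user-specified layouts. The paper's exact mechanism is not detailed here, but a common first step in layout-guided diffusion is rasterizing each subject's bounding box into a spatial mask that conditions generation. The sketch below is purely illustrative (all names are hypothetical, not from the paper):

```python
# Hedged sketch: one generic way to turn user-specified bounding boxes into
# per-subject spatial masks, as layout-guided diffusion methods commonly do.
# This is NOT the paper's Dual Layout Control mechanism, only an illustration.

def rasterize_layout(boxes, height, width):
    """Convert normalized (x0, y0, x1, y1) boxes into binary masks.

    boxes  : list of (x0, y0, x1, y1) in [0, 1], one box per reference subject
    returns: list of height x width nested lists, 1 inside each box, 0 outside
    """
    masks = []
    for x0, y0, x1, y1 in boxes:
        # Map normalized coordinates onto the pixel (or latent) grid.
        c0, c1 = int(x0 * width), int(x1 * width)
        r0, r1 = int(y0 * height), int(y1 * height)
        mask = [[1 if (r0 <= r < r1 and c0 <= c < c1) else 0
                 for c in range(width)]
                for r in range(height)]
        masks.append(mask)
    return masks

# Example: two subjects placed in the left and right halves of an 8x8 grid.
masks = rasterize_layout([(0.0, 0.0, 0.5, 1.0), (0.5, 0.0, 1.0, 1.0)], 8, 8)
```

In practice such masks would be downsampled to the latent resolution and injected into cross-attention or added as conditioning channels; the details vary per method.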

@article{li2025_2505.20909,
  title={Create Anything Anywhere: Layout-Controllable Personalized Diffusion Model for Multiple Subjects},
  author={Wei Li and Hebei Li and Yansong Peng and Siying Wu and Yueyi Zhang and Xiaoyan Sun},
  journal={arXiv preprint arXiv:2505.20909},
  year={2025}
}