Diffusion models have significantly advanced text-to-image generation, laying the foundation for personalized generative frameworks. However, existing methods lack precise layout controllability and overlook the potential of dynamic features of reference subjects for improving fidelity. In this work, we propose the Layout-Controllable Personalized Diffusion (LCP-Diffusion) model, a novel framework that integrates subject identity preservation with flexible layout guidance in a tuning-free manner. Our model employs a Dynamic-Static Complementary Visual Refining module to comprehensively capture the intricate details of reference subjects, and introduces a Dual Layout Control mechanism to enforce robust spatial control across both the training and inference stages. Extensive experiments validate that LCP-Diffusion excels in both identity preservation and layout controllability. To the best of our knowledge, this is a pioneering work enabling users to "create anything anywhere".
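The abstract does not detail how the Dual Layout Control mechanism is implemented. As a purely illustrative, hedged sketch, a common way to impose box-level spatial control in diffusion models is to mask cross-attention so each subject's tokens only attend to latent positions inside its bounding box; the function name, box format, and token spans below are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of bounding-box cross-attention masking (not the paper's
# Dual Layout Control): each subject's prompt tokens are only allowed to attend
# to latent positions inside that subject's box.
import torch

def build_layout_attn_mask(boxes, token_spans, latent_h, latent_w, num_tokens):
    """boxes: list of (x0, y0, x1, y1) in [0, 1]; token_spans: list of (start, end)
    token indices for each subject; returns a (H*W, num_tokens) bool mask where
    True marks allowed attention."""
    mask = torch.ones(latent_h * latent_w, num_tokens, dtype=torch.bool)
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, latent_h), torch.linspace(0, 1, latent_w), indexing="ij"
    )
    xs, ys = xs.reshape(-1), ys.reshape(-1)  # normalized coordinates per latent position
    for (x0, y0, x1, y1), (t0, t1) in zip(boxes, token_spans):
        inside = (xs >= x0) & (xs <= x1) & (ys >= y0) & (ys <= y1)
        # Outside the subject's box, block attention to that subject's tokens.
        mask[~inside, t0:t1] = False
    return mask

# Example: two subjects placed left and right on a 64x64 latent grid with 77 text tokens.
mask = build_layout_attn_mask(
    boxes=[(0.05, 0.2, 0.45, 0.9), (0.55, 0.2, 0.95, 0.9)],
    token_spans=[(4, 8), (10, 14)],
    latent_h=64, latent_w=64, num_tokens=77,
)
print(mask.shape)  # torch.Size([4096, 77])
```

Such a mask would typically be applied inside the cross-attention layers of the denoising U-Net; the paper's actual mechanism operates at both training and inference time, which this sketch does not attempt to reproduce.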
@article{li2025_2505.20909,
  title   = {Create Anything Anywhere: Layout-Controllable Personalized Diffusion Model for Multiple Subjects},
  author  = {Wei Li and Hebei Li and Yansong Peng and Siying Wu and Yueyi Zhang and Xiaoyan Sun},
  journal = {arXiv preprint arXiv:2505.20909},
  year    = {2025}
}