
MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models

Main: 7 pages · Appendix: 6 pages · Bibliography: 3 pages · 18 figures · 7 tables
Abstract

Recent text-to-image models generate high-quality images from text prompts but lack precise control over specific components within visual concepts. To address this limitation, we introduce component-controllable personalization, a new task that allows users to customize and reconfigure individual components within concepts. This task faces two challenges: semantic pollution, where undesirable elements distort the concept, and semantic imbalance, which leads to disproportionate learning of the target concept and component. To tackle these challenges, we design MagicTailor, a framework that uses Dynamic Masked Degradation to adaptively perturb unwanted visual semantics and Dual-Stream Balancing to promote more balanced learning of desired visual semantics. Experimental results show that MagicTailor outperforms existing methods on this task and enables more personalized, nuanced, and creative image generation.
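As a rough illustration of the masked-degradation idea described above, the sketch below perturbs image regions outside a user-provided mask so that only the desired concept or component drives personalization. The function name, arguments, and blending scheme are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch (not the paper's code): degrade regions outside the
# mask of the desired concept/component before a fine-tuning step.
import torch

def masked_degradation(image: torch.Tensor,
                       keep_mask: torch.Tensor,
                       strength: float = 0.5) -> torch.Tensor:
    """
    image:     (C, H, W) tensor in [0, 1]
    keep_mask: (1, H, W) binary tensor, 1 = region to preserve
    strength:  how strongly to degrade the unmasked (unwanted) regions
    """
    noise = torch.rand_like(image)                       # random perturbation
    degraded = (1.0 - strength) * image + strength * noise
    # Keep desired semantics intact; perturb everything else.
    return keep_mask * image + (1.0 - keep_mask) * degraded
```

In this reading, the "dynamic" aspect could correspond to scheduling `strength` over training steps, but that scheduling choice is also an assumption here.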

@article{zhou2025_2410.13370,
  title={MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models},
  author={Donghao Zhou and Jiancheng Huang and Jinbin Bai and Jiaze Wang and Hao Chen and Guangyong Chen and Xiaowei Hu and Pheng-Ann Heng},
  journal={arXiv preprint arXiv:2410.13370},
  year={2025}
}