
PartComposer: Learning and Composing Part-Level Concepts from Single-Image Examples

Main: 7 pages
Bibliography: 1 page
Appendix: 12 pages
26 figures
3 tables
Abstract

We present PartComposer, a framework for part-level concept learning from single-image examples that enables text-to-image diffusion models to compose novel objects from meaningful components. Existing methods either struggle to learn fine-grained concepts effectively or require a large dataset as input. To address one-shot data scarcity, we propose a dynamic data synthesis pipeline that generates diverse part compositions. Most importantly, we propose to maximize the mutual information between denoised latents and structured concept codes via a concept predictor, enabling direct regulation of concept disentanglement and re-composition supervision. Our method achieves strong disentanglement and controllable composition, outperforming subject-level and part-level baselines when mixing concepts from the same or different object categories.
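The mutual-information objective described above can be sketched as follows. A common way to maximize I(latent; concept code) is through a variational lower bound: train a predictor to recover the concept code from the denoised latent, and minimize its prediction loss. This is a minimal PyTorch illustration of that idea only; the `ConceptPredictor` architecture, tensor shapes, and the multi-label encoding of concept codes are all assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ConceptPredictor(nn.Module):
    """Hypothetical predictor that infers which part-level concepts
    are present in a denoised latent (shapes are illustrative)."""

    def __init__(self, latent_dim: int, num_concepts: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_concepts),  # one logit per concept
        )

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.net(latent)

def concept_mi_loss(predictor: ConceptPredictor,
                    denoised_latent: torch.Tensor,
                    concept_code: torch.Tensor) -> torch.Tensor:
    # Multi-label cross-entropy: minimizing this maximizes a standard
    # variational lower bound on the mutual information between the
    # denoised latent and the structured concept code.
    logits = predictor(denoised_latent)
    return nn.functional.binary_cross_entropy_with_logits(logits, concept_code)

# Toy usage: random tensors stand in for denoised diffusion latents.
torch.manual_seed(0)
predictor = ConceptPredictor(latent_dim=64, num_concepts=8)
latents = torch.randn(4, 64)
codes = torch.randint(0, 2, (4, 8)).float()  # which parts appear in each image
loss = concept_mi_loss(predictor, latents, codes)
```

In this framing, the diffusion model and the concept predictor are trained jointly: the predictor's loss serves as an auxiliary term that pushes each concept's information into the corresponding part of the latent, which is what enables the disentanglement and re-composition supervision described in the abstract.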

@article{liu2025_2506.03004,
  title={PartComposer: Learning and Composing Part-Level Concepts from Single-Image Examples},
  author={Junyu Liu and R. Kenny Jones and Daniel Ritchie},
  journal={arXiv preprint arXiv:2506.03004},
  year={2025}
}