
Parallel Rescaling: Rebalancing Consistency Guidance for Personalized Diffusion Models

Main: 4 pages, 3 figures, 2 tables; Bibliography: 1 page; Appendix: 3 pages
Abstract

Personalizing diffusion models to specific users or concepts remains challenging, particularly when only a few reference images are available. Existing methods such as DreamBooth and Textual Inversion often overfit to limited data, causing misalignment between generated images and text prompts when attempting to balance identity fidelity with prompt adherence. While Direct Consistency Optimization (DCO) with its consistency-guided sampling partially alleviates this issue, it still struggles with complex or stylized prompts. In this paper, we propose a parallel rescaling technique for personalized diffusion models. Our approach explicitly decomposes the consistency guidance signal into parallel and orthogonal components relative to classifier-free guidance (CFG). By rescaling the parallel component, we minimize disruptive interference with CFG while preserving the subject's identity. Unlike prior personalization methods, our technique does not require additional training data or expensive annotations. Extensive experiments show improved prompt alignment and visual fidelity compared to baseline methods, even on challenging stylized prompts. These findings highlight the potential of parallel-rescaled guidance to yield more stable and accurate personalization for diverse user inputs.
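The abstract describes decomposing the consistency-guidance direction into components parallel and orthogonal to the CFG direction and shrinking only the parallel part. The sketch below is a minimal illustration of that decomposition, not the authors' implementation; the function name, argument layout, and the default rescaling factor are assumptions, and the guidance vectors are treated as plain NumPy arrays.

```python
import numpy as np

def rescale_parallel_guidance(cfg_dir, cons_dir, parallel_scale=0.5):
    """Decompose a consistency-guidance vector into components parallel and
    orthogonal to the CFG direction, then rescale only the parallel part.

    NOTE: illustrative sketch; names and the default `parallel_scale`
    are assumptions, not values from the paper.
    """
    cfg_flat = cfg_dir.reshape(-1)
    cons_flat = cons_dir.reshape(-1)

    # Project the consistency direction onto the CFG direction.
    denom = np.dot(cfg_flat, cfg_flat) + 1e-8
    parallel = (np.dot(cons_flat, cfg_flat) / denom) * cfg_flat
    orthogonal = cons_flat - parallel

    # Shrink only the parallel component to reduce interference with CFG,
    # leaving the orthogonal (identity-related) component untouched.
    rebalanced = parallel_scale * parallel + orthogonal
    return rebalanced.reshape(cons_dir.shape)

# Hypothetical use at one sampling step, with eps_* from the denoiser:
#   cfg_dir  = eps_text - eps_uncond
#   cons_dir = eps_personalized - eps_text
#   guided   = eps_uncond + w_cfg * cfg_dir \
#              + w_cons * rescale_parallel_guidance(cfg_dir, cons_dir)
```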

@article{chae2025_2506.00607,
  title={Parallel Rescaling: Rebalancing Consistency Guidance for Personalized Diffusion Models},
  author={JungWoo Chae and Jiyoon Kim and Sangheum Hwang},
  journal={arXiv preprint arXiv:2506.00607},
  year={2025}
}