ResearchTrend.AI
Training-Free Identity Preservation in Stylized Image Generation Using Diffusion Models

7 June 2025
Mohammad Ali Rezaei
Helia Hajikazem
Saeed Khanehgir
Mahdi Javanmardi
Main text: 7 pages, 8 figures, 1 table; bibliography: 4 pages
Abstract

While diffusion models have demonstrated remarkable generative capabilities, existing style transfer techniques often struggle to maintain identity while achieving high-quality stylization. This limitation is particularly acute for images where faces are small or exhibit significant camera-to-face distances, frequently leading to inadequate identity preservation. To address this, we introduce a novel, training-free framework for identity-preserved stylized image synthesis using diffusion models. Key contributions include: (1) the "Mosaic Restored Content Image" technique, significantly enhancing identity retention, especially in complex scenes; and (2) a training-free content consistency loss that enhances the preservation of fine-grained content details by directing more attention to the original image during stylization. Our experiments reveal that the proposed approach substantially surpasses the baseline model in concurrently maintaining high stylistic fidelity and robust identity integrity, particularly under conditions of small facial regions or significant camera-to-face distances, all without necessitating model retraining or fine-tuning.

@article{rezaei2025_2506.06802,
  title={Training-Free Identity Preservation in Stylized Image Generation Using Diffusion Models},
  author={Mohammad Ali Rezaei and Helia Hajikazem and Saeed Khanehgir and Mahdi Javanmardi},
  journal={arXiv preprint arXiv:2506.06802},
  year={2025}
}