
DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations

Computer Vision and Pattern Recognition (CVPR), 2024
11 March 2024
Tianhao Qi
Shancheng Fang
Yanze Wu
Hongtao Xie
Jiawei Liu
Lang Chen
Qian He
Yongdong Zhang
Topics: DiffM
Links: arXiv:2403.06951 (abs) · PDF · HTML · HuggingFace (2 upvotes) · GitHub (2,948★)
Main: 8 pages · Bibliography: 2 pages · Appendix: 3 pages · 14 figures · 5 tables
Abstract

Diffusion-based text-to-image models harbor immense potential for transferring the style of a reference image. However, current encoder-based approaches significantly impair the text controllability of text-to-image models while transferring styles. In this paper, we introduce DEADiff to address this issue using the following two strategies: 1) a mechanism to decouple the style and semantics of reference images. The decoupled feature representations are first extracted by Q-Formers instructed by different text descriptions. They are then injected into mutually exclusive subsets of cross-attention layers for better disentanglement. 2) A non-reconstructive learning method. The Q-Formers are trained on paired images rather than an identical target, where the reference image and the ground-truth image share the same style or semantics. We show that DEADiff attains the best visual stylization results and the optimal balance between the text controllability inherent in the text-to-image model and style similarity to the reference image, as demonstrated both quantitatively and qualitatively. Our project page is https://tianhao-qi.github.io/DEADiff/.
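The injection mechanism in strategy 1) can be pictured with a short sketch. The following PyTorch snippet is a minimal illustration under assumed shapes and layer counts, not the authors' released implementation: the class names (CrossAttention, DisentangledInjection), the particular layer split style_layers, and all dimensions are hypothetical. It only shows the idea of routing two decoupled representations into disjoint subsets of cross-attention layers.

```python
import torch
import torch.nn as nn


class CrossAttention(nn.Module):
    """Single-head cross-attention; `context` is the conditioning sequence."""

    def __init__(self, dim: int, context_dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(context_dim, dim, bias=False)
        self.to_v = nn.Linear(context_dim, dim, bias=False)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v


class DisentangledInjection(nn.Module):
    """Routes style features and semantic features (e.g. the outputs of two
    instructed Q-Formers) into mutually exclusive subsets of cross-attention
    layers, so that no layer is conditioned on both representations."""

    def __init__(self, num_layers=8, dim=320, context_dim=768,
                 style_layers=(4, 5, 6, 7)):
        super().__init__()
        self.blocks = nn.ModuleList(
            CrossAttention(dim, context_dim) for _ in range(num_layers))
        self.style_layers = set(style_layers)  # assumed split, not the paper's exact choice

    def forward(self, x, style_feats, semantic_feats):
        for i, block in enumerate(self.blocks):
            # each layer sees exactly one of the two decoupled representations
            context = style_feats if i in self.style_layers else semantic_feats
            x = x + block(x, context)
        return x


# Toy usage with made-up shapes: 64 latent tokens, 16 query tokens per Q-Former.
injector = DisentangledInjection()
latents = torch.randn(1, 64, 320)
style_feats = torch.randn(1, 16, 768)
semantic_feats = torch.randn(1, 16, 768)
out = injector(latents, style_feats, semantic_feats)  # (1, 64, 320)
```

In DEADiff itself, the two context sequences come from Q-Formers instructed with different text descriptions; the abstract does not specify which layers receive which representation, so the split above is an arbitrary placeholder.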
