Controlled and Conditional Text to Image Generation with Diffusion Prior

23 February 2023
Pranav Aggarwal, Hareesh Ravi, Naveen Marri, Sachin Kelkar, F. Chen, V. Khuc, Midhun Harikumar, Ritiz Tambi, Sudharshan Reddy Kakumanu, Purvak Lapsiya, Alvin Ghouas, Sarah Saber, Malavika Ramprasad, Baldo Faieta, Ajinkya Kale
Abstract

Denoising diffusion models have shown remarkable performance in generating diverse, high-quality images from text. Numerous techniques have been proposed on top of, or in alignment with, models like Stable Diffusion and Imagen that generate images directly from text. A less explored approach is DALLE-2's two-step process, comprising a Diffusion Prior that generates a CLIP image embedding from text and a Diffusion Decoder that generates an image from that embedding. We explore the capabilities of the Diffusion Prior and the advantages of an intermediate CLIP representation. We observe that the Diffusion Prior can be used in a memory- and compute-efficient way to constrain generation to a specific domain without altering the larger Diffusion Decoder. Moreover, we show that the Diffusion Prior can be trained with additional conditional information, such as a color histogram, to further control the generation. We show quantitatively and qualitatively that the proposed approaches perform better than prompt engineering for domain-specific generation and than existing baselines for color-conditioned generation. We believe that our observations and results will instigate further research into the diffusion prior and uncover more of its capabilities.
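The two-stage pipeline the abstract describes can be sketched in a few lines: a small prior maps a CLIP text embedding (optionally concatenated with conditioning such as a color histogram) to a CLIP image embedding, and a large frozen decoder maps that embedding to an image. The sketch below is a minimal, self-contained illustration; the module names, embedding widths, histogram size, and MLP/linear stand-ins are assumptions for clarity, not the paper's actual code.

```python
# Hypothetical sketch of the DALLE-2-style two-stage pipeline from the
# abstract. Dimensions, class names, and the color-histogram interface
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

CLIP_DIM = 768    # assumed CLIP embedding width
HIST_BINS = 64    # assumed color-histogram size used as extra conditioning

class DiffusionPrior(nn.Module):
    """Maps a CLIP text embedding (plus optional conditioning such as a
    color histogram) to a CLIP image embedding. The real prior is a
    diffusion model; an MLP stands in to keep the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CLIP_DIM + HIST_BINS, 1024),
            nn.GELU(),
            nn.Linear(1024, CLIP_DIM),
        )

    def forward(self, text_emb, color_hist=None):
        if color_hist is None:
            # Unconditional case: zero out the extra conditioning slot.
            color_hist = torch.zeros(text_emb.shape[0], HIST_BINS)
        return self.net(torch.cat([text_emb, color_hist], dim=-1))

class DiffusionDecoder(nn.Module):
    """Maps a CLIP image embedding to an image. This large module stays
    frozen; only the much smaller prior is retrained per domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(CLIP_DIM, 3 * 64 * 64)  # toy stand-in

    @torch.no_grad()
    def forward(self, image_emb):
        return self.net(image_emb).view(-1, 3, 64, 64)

# Usage: the text embedding would come from a real CLIP text encoder.
text_emb = torch.randn(1, CLIP_DIM)   # stand-in for CLIP(text)
hist = torch.rand(1, HIST_BINS)       # target color histogram
prior, decoder = DiffusionPrior(), DiffusionDecoder()
image = decoder(prior(text_emb, color_hist=hist))
print(image.shape)  # torch.Size([1, 3, 64, 64])
```

The design point this illustrates is the one the abstract emphasizes: because only the prior consumes the conditioning, domain constraints and color control can be added by retraining the small prior while the expensive decoder remains untouched.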
