Guiding Diffusion with Deep Geometric Moments: Balancing Fidelity and Variation

18 May 2025
Sangmin Jung
Utkarsh Nath
Yezhou Yang
Giulia Pedrielli
Joydeep Biswas
Amy Zhang
Hassan Ghasemzadeh
Pavan Turaga
Abstract

Text-to-image generation models have achieved remarkable capabilities in synthesizing images, but they often struggle to provide fine-grained control over the output. Existing guidance approaches, such as segmentation maps and depth maps, introduce spatial rigidity that restricts the inherent diversity of diffusion models. In this work, we introduce Deep Geometric Moments (DGM) as a novel form of guidance that encapsulates the subject's visual features and nuances through a learned geometric prior. Unlike DINO or CLIP features, which overemphasize global image features or semantics, DGMs focus specifically on the subject itself; and unlike ResNet features, which are sensitive to pixel-wise perturbations, DGMs rely on robust geometric moments. Our experiments demonstrate that DGM effectively balances control and diversity in diffusion-based image generation, providing a flexible mechanism for steering the diffusion process.
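
To make the guidance idea concrete, here is a minimal sketch of how a moment-based feature loss could steer a diffusion sampler. This is not the authors' released code: the raw geometric moments computed below are a hand-crafted stand-in for the paper's learned DGM features, and the names denoise, guided_step, and the guidance scale are hypothetical placeholders for the model's clean-image estimate and update rule.

import torch

# Raw geometric moments m_pq = sum_{x,y} x^p y^q I(x, y), computed per channel
# over normalized coordinates. A simple, differentiable stand-in for the
# paper's learned deep geometric moment features.
def geometric_moments(img, max_order=3):
    _, h, w = img.shape
    ys = torch.linspace(0.0, 1.0, h, device=img.device)
    xs = torch.linspace(0.0, 1.0, w, device=img.device)
    moments = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            basis = torch.outer(ys.pow(q), xs.pow(p))        # (H, W) monomial grid
            moments.append((img * basis).sum(dim=(-2, -1)))  # (C,) per order
    return torch.cat(moments)

# One guidance update (hypothetical): nudge the noisy sample x_t so that the
# denoiser's clean-image estimate moves its moments toward the reference
# subject's moments, in the spirit of classifier-style guidance.
def guided_step(x_t, ref_moments, denoise, guidance_scale=50.0):
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoise(x_t)  # model's estimate of the clean image
    loss = torch.nn.functional.mse_loss(geometric_moments(x0_pred), ref_moments)
    grad = torch.autograd.grad(loss, x_t)[0]
    return (x_t - guidance_scale * grad).detach()

# Toy usage with an identity-like "denoiser", just to exercise the math.
ref_m = geometric_moments(torch.rand(3, 64, 64))
x = guided_step(torch.randn(3, 64, 64), ref_m, denoise=lambda z: z.clamp(0, 1))
print(x.shape)  # torch.Size([3, 64, 64])

In an actual sampler this update would be interleaved with the usual denoising steps at each timestep, and the hand-crafted moments above would be replaced by the learned DGM network described in the paper.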

@article{jung2025_2505.12486,
  title={Guiding Diffusion with Deep Geometric Moments: Balancing Fidelity and Variation},
  author={Sangmin Jung and Utkarsh Nath and Yezhou Yang and Giulia Pedrielli and Joydeep Biswas and Amy Zhang and Hassan Ghasemzadeh and Pavan Turaga},
  journal={arXiv preprint arXiv:2505.12486},
  year={2025}
}