
Normalized Attention Guidance: Universal Negative Guidance for Diffusion Models

27 May 2025
Dar-Yen Chen
Hmrishav Bandyopadhyay
Kai Zou
Yi-Zhe Song
Main: 9 pages · Appendix: 15 pages · Bibliography: 3 pages · 26 figures · 11 tables
Abstract

Negative guidance -- explicitly suppressing unwanted attributes -- remains a fundamental challenge in diffusion models, particularly in few-step sampling regimes. While Classifier-Free Guidance (CFG) works well in standard settings, it fails under aggressive sampling step compression due to divergent predictions between positive and negative branches. We present Normalized Attention Guidance (NAG), an efficient, training-free mechanism that applies extrapolation in attention space with L1-based normalization and refinement. NAG restores effective negative guidance where CFG collapses while maintaining fidelity. Unlike existing approaches, NAG generalizes across architectures (UNet, DiT), sampling regimes (few-step, multi-step), and modalities (image, video), functioning as a universal plug-in with minimal computational overhead. Through extensive experimentation, we demonstrate consistent improvements in text alignment (CLIP Score), fidelity (FID, PFID), and human-perceived quality (ImageReward). Our ablation studies validate each design component, while user studies confirm significant preference for NAG-guided outputs. As a model-agnostic inference-time approach requiring no retraining, NAG provides effortless negative guidance for all modern diffusion frameworks -- pseudocode in the Appendix!
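The abstract describes NAG only at a high level. As a rough illustration of the idea -- extrapolation in attention space, followed by L1-based normalization and a refinement blend -- the PyTorch sketch below shows one plausible shape of the operation. It is not the authors' implementation (their pseudocode is in the paper's appendix); the tensor layout, hyperparameter names (guidance_scale, norm_threshold, blend_alpha), and default values are all assumptions made for illustration.

```python
import torch

def nag_attention_guidance(z_pos: torch.Tensor,
                           z_neg: torch.Tensor,
                           guidance_scale: float = 5.0,
                           norm_threshold: float = 2.5,
                           blend_alpha: float = 0.5) -> torch.Tensor:
    """Illustrative sketch of attention-space negative guidance.

    z_pos / z_neg: attention-layer outputs from the positive / negative
    prompt branches, shape (batch, tokens, dim). Hyperparameter names and
    defaults are placeholders, not the paper's official values.
    """
    # Extrapolate away from the negative branch in attention space.
    z_ext = z_pos + guidance_scale * (z_pos - z_neg)

    # L1-based normalization: rescale tokens whose L1 norm grew too large
    # relative to the positive branch, clipping the ratio at a threshold.
    l1_pos = z_pos.abs().sum(dim=-1, keepdim=True)
    l1_ext = z_ext.abs().sum(dim=-1, keepdim=True)
    ratio = l1_ext / (l1_pos + 1e-6)
    z_norm = torch.where(ratio > norm_threshold,
                         z_ext * (norm_threshold / ratio),
                         z_ext)

    # Refinement: blend the normalized extrapolation back toward the
    # positive-branch output to preserve fidelity.
    return blend_alpha * z_norm + (1.0 - blend_alpha) * z_pos
```

In this reading, the function would replace the cross-attention output at inference time, using the negative-prompt branch only inside attention rather than at the noise-prediction level as CFG does, which is consistent with the abstract's claim of a training-free, architecture-agnostic plug-in.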

@article{chen2025_2505.21179,
  title={Normalized Attention Guidance: Universal Negative Guidance for Diffusion Models},
  author={Dar-Yen Chen and Hmrishav Bandyopadhyay and Kai Zou and Yi-Zhe Song},
  journal={arXiv preprint arXiv:2505.21179},
  year={2025}
}