Transformers without Normalization

13 March 2025
Jiachen Zhu
Xinlei Chen
Kaiming He
Yann LeCun
Zhuang Liu
Abstract

Normalization layers are ubiquitous in modern neural networks and have long been considered essential. This work demonstrates that Transformers without normalization can achieve the same or better performance using a remarkably simple technique. We introduce Dynamic Tanh (DyT), an element-wise operation DyT(x) = tanh(αx), as a drop-in replacement for normalization layers in Transformers. DyT is inspired by the observation that layer normalization in Transformers often produces tanh-like, S-shaped input-output mappings. By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning. We validate the effectiveness of Transformers with DyT across diverse settings, ranging from recognition to generation, supervised to self-supervised learning, and computer vision to language models. These findings challenge the conventional understanding that normalization layers are indispensable in modern neural networks, and offer new insights into their role in deep networks.
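The abstract describes DyT as an element-wise tanh(αx) used as a drop-in replacement for normalization layers. A minimal sketch in PyTorch of what such a module could look like is given below; the learnable scalar α, its initial value, and the per-channel affine parameters (mirroring what LayerNorm usually provides) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class DyT(nn.Module):
    # Dynamic Tanh: element-wise tanh(alpha * x), intended to stand in for a
    # normalization layer in a Transformer block.
    def __init__(self, dim: int, init_alpha: float = 0.5):
        super().__init__()
        # Learnable scalar controlling how strongly inputs are squashed (assumed scalar).
        self.alpha = nn.Parameter(torch.ones(1) * init_alpha)
        # Optional per-channel affine, analogous to LayerNorm's gamma/beta (assumption).
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # DyT(x) = tanh(alpha * x), followed by the assumed affine transform.
        return self.weight * torch.tanh(self.alpha * x) + self.bias

Swapping an nn.LayerNorm(dim) for DyT(dim) inside a Transformer block is the kind of substitution the paper evaluates; init_alpha = 0.5 here is only an illustrative default.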

@article{zhu2025_2503.10622,
  title={Transformers without Normalization},
  author={Jiachen Zhu and Xinlei Chen and Kaiming He and Yann LeCun and Zhuang Liu},
  journal={arXiv preprint arXiv:2503.10622},
  year={2025}
}