

Go Wider Instead of Deeper

25 July 2021
Fuzhao Xue, Ziji Shi, Futao Wei, Yuxuan Lou, Yong Liu, Yang You
Topics: ViT, MoE
Abstract

More transformer blocks with residual connections have recently achieved impressive results on various tasks. To achieve better performance with fewer trainable parameters, recent methods propose to go shallower via parameter sharing or model compression along the depth dimension. However, weak modeling capacity limits their performance. In contrast, going wider by introducing more trainable matrices and parameters would produce a huge model that requires advanced parallelism to train and serve. In this paper, we propose a parameter-efficient framework: going wider instead of deeper. Specifically, following existing works, we adopt parameter sharing to compress along the depth; such deployment alone, however, limits performance. To maximize modeling capacity, we scale along the model width by replacing the feed-forward network (FFN) with a mixture-of-experts (MoE) layer. Across transformer blocks, instead of sharing normalization layers, we propose to use individual layernorms to transform the varying semantic representations in a more parameter-efficient way. To evaluate our plug-and-run framework, we design WideNet and conduct comprehensive experiments on popular computer vision and natural language processing benchmarks. On ImageNet-1K, our best model outperforms Vision Transformer (ViT) by 1.5% with 0.72× the trainable parameters. Using 0.46× and 0.13× the parameters, WideNet still surpasses ViT and ViT-MoE by 0.8% and 2.1%, respectively. On four natural language processing datasets, WideNet outperforms ALBERT by 1.8% on average and surpasses BERT with factorized embedding parameterization by 0.8% with fewer parameters.
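The architecture described in the abstract combines three ideas: one set of transformer weights shared across every depth, an MoE layer in place of the FFN, and individual LayerNorm parameters at each depth. The following NumPy sketch illustrates that combination; it is not the authors' implementation — the top-1 router, expert shapes, and the omission of the attention sub-layer are all simplifying assumptions for clarity.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize over the feature dimension with learnable scale/shift.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def moe_ffn(x, experts, gate_w):
    # Top-1 routing: each token is sent to its highest-scoring expert.
    scores = x @ gate_w                       # (tokens, n_experts)
    choice = scores.argmax(axis=-1)
    out = np.empty_like(x)
    for e, (w1, w2) in enumerate(experts):
        mask = choice == e
        if mask.any():
            h = np.maximum(x[mask] @ w1, 0)   # ReLU expert FFN
            out[mask] = h @ w2
    return out

def widenet_forward(x, depth, shared, norms):
    # `shared` (experts + gate) is reused at EVERY depth — parameter
    # sharing along depth. Only the LayerNorm params differ per depth,
    # letting each recurrence transform representations differently.
    experts, gate_w = shared
    for d in range(depth):
        gamma, beta = norms[d]                # individual layernorm
        h = layer_norm(x, gamma, beta)
        x = x + moe_ffn(h, experts, gate_w)   # residual connection
    return x

# Toy configuration (sizes are illustrative, not from the paper).
rng = np.random.default_rng(0)
d_model, d_ff, n_experts, depth = 8, 16, 4, 3
experts = [(0.1 * rng.standard_normal((d_model, d_ff)),
            0.1 * rng.standard_normal((d_ff, d_model)))
           for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))
norms = [(np.ones(d_model), np.zeros(d_model)) for _ in range(depth)]
x = rng.standard_normal((5, d_model))
y = widenet_forward(x, depth, (experts, gate_w), norms)
```

Note that the trainable parameter count here is independent of `depth` except for the per-depth LayerNorm vectors, which is exactly the parameter-efficiency argument the abstract makes.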
