Do Transformer Modifications Transfer Across Implementations and Applications?

23 February 2021
Sharan Narang
Hyung Won Chung
Yi Tay
William Fedus
Thibault Févry
Michael Matena
Karishma Malkan
Noah Fiedel
Noam M. Shazeer
Zhenzhong Lan
Yanqi Zhou
Wei Li
Nan Ding
Jake Marcus
Adam Roberts
Colin Raffel
arXiv: 2102.11972
Abstract

The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.
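To make concrete what a "relatively minor change" to the Transformer can look like, the sketch below shows a generic feed-forward block in which only the activation function is swapped (e.g. ReLU for GeLU). This is purely illustrative and hypothetical; it is not the authors' code, and the module names and hyperparameters are assumptions, not taken from the paper.

import torch
import torch.nn as nn

class FeedForward(nn.Module):
    """Generic Transformer feed-forward block with a swappable nonlinearity."""

    def __init__(self, d_model: int, d_ff: int, activation: str = "relu"):
        super().__init__()
        self.wi = nn.Linear(d_model, d_ff)   # expand
        self.wo = nn.Linear(d_ff, d_model)   # project back
        # The choice of nonlinearity is one example of the kind of small
        # architectural variant the paper compares under a shared training setup.
        self.act = {"relu": nn.ReLU(), "gelu": nn.GELU(), "silu": nn.SiLU()}[activation]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.wo(self.act(self.wi(x)))

# Two otherwise identical blocks differing only in the activation function.
x = torch.randn(2, 16, 512)
baseline = FeedForward(512, 2048, activation="relu")
variant = FeedForward(512, 2048, activation="gelu")
print(baseline(x).shape, variant(x).shape)  # both torch.Size([2, 16, 512])

Under the paper's conclusion, whether such a variant actually helps may depend heavily on the surrounding implementation and training recipe rather than on the modification alone.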
