Sparsifying Transformer Models with Trainable Representation Pooling

10 September 2020
Michał Pietruszka
Łukasz Borchmann
Łukasz Garncarek
Abstract

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during the training process, thus focusing on the task-specific parts of an input. A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust trainable top-k operator. Our experiments on a challenging long document summarization task show that even our simple baseline performs comparably to the current SOTA, and with trainable pooling, we can retain its top quality, while being 1.8× faster during training, 4.5× faster during inference, and up to 13× more computationally efficient in the decoder.
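
The core mechanism described above is a trainable pooling layer that scores token representations and keeps only the k most informative ones before further processing. The sketch below is a minimal illustration of that idea in PyTorch, not the paper's actual operator: it assumes a learned linear scorer and hard top-k selection gated by the scores, whereas the paper relies on a smooth, differentiable top-k relaxation. The layer name, scorer, and shapes are illustrative assumptions.

```python
# Hedged sketch of trainable top-k representation pooling (illustrative only;
# the paper uses a differentiable top-k operator, not hard selection).
import torch
import torch.nn as nn


class TopKPooling(nn.Module):
    def __init__(self, hidden_dim: int, k: int):
        super().__init__()
        self.k = k
        self.scorer = nn.Linear(hidden_dim, 1)  # learns per-token importance

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        scores = self.scorer(hidden_states).squeeze(-1)          # (batch, seq_len)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)      # keep k tokens
        gather_idx = topk_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        selected = hidden_states.gather(1, gather_idx)           # (batch, k, hidden_dim)
        # Gate by the scores so the scorer still receives gradients
        # despite the non-differentiable hard selection.
        return selected * torch.sigmoid(topk_scores).unsqueeze(-1)


# Usage: shrink a 4096-token sequence to 512 pooled representations,
# so downstream attention cost scales with k rather than the full length.
pool = TopKPooling(hidden_dim=768, k=512)
x = torch.randn(2, 4096, 768)
print(pool(x).shape)  # torch.Size([2, 512, 768])
```

Because the decoder then attends over only k pooled representations instead of the full input, attention cost in that part of the model no longer grows quadratically with the original sequence length, which is the source of the reported speed and efficiency gains.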
