Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention

11 October 2023
Huiyin Xue
Nikolaos Aletras
arXiv: 2310.07911
Abstract

Scaling pre-trained language models has resulted in large performance gains on various natural language processing tasks, but comes at a substantial cost in memory requirements. Inspired by the position embeddings in transformers, we aim to simplify and reduce the memory footprint of the multi-head attention (MHA) mechanism. We propose an alternative module that uses only a single shared projection matrix and multiple head embeddings (MHE), i.e., one per head. We empirically demonstrate that MHE attention is substantially more memory efficient than alternative attention mechanisms, while retaining a high ratio of vanilla MHA's predictive performance on several downstream tasks. MHE attention requires only a negligible number of additional parameters ($3nd$, where $n$ is the number of attention heads and $d$ the size of the head embeddings) compared to single-head attention, whereas MHA requires $(3n^2-3n)d^2 - 3nd$ additional parameters.
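As a rough illustration of the idea, the sketch below gives one plausible PyTorch reading of MHE attention: all heads share a single set of query/key/value projections of head size $d$ (as in single-head attention), and each head is distinguished only by three learned $d$-dimensional embeddings added to the shared projections, for $3nd$ head-specific parameters in total. The module name MHEAttention, the additive placement of the head embeddings, and the output projection are illustrative assumptions, not the authors' reference implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MHEAttention(nn.Module):
    """Multi-head attention with one shared projection and per-head embeddings (sketch)."""

    def __init__(self, d_model: int, n_heads: int, d_head: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        # Shared Q/K/V projections, sized as in single-head attention.
        self.q_proj = nn.Linear(d_model, d_head, bias=False)
        self.k_proj = nn.Linear(d_model, d_head, bias=False)
        self.v_proj = nn.Linear(d_model, d_head, bias=False)
        self.out_proj = nn.Linear(n_heads * d_head, d_model, bias=False)
        # The only head-specific parameters: 3 * n_heads * d_head embeddings
        # (the "3nd" term from the abstract), one per head for each of Q, K, V.
        self.head_emb = nn.Parameter(torch.randn(3, n_heads, d_head) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        # Shared projections broadcast over heads; heads differ only via their embeddings.
        q = self.q_proj(x)[:, None] + self.head_emb[0][None, :, None, :]
        k = self.k_proj(x)[:, None] + self.head_emb[1][None, :, None, :]
        v = self.v_proj(x)[:, None] + self.head_emb[2][None, :, None, :]
        # q, k, v: (batch, n_heads, seq_len, d_head)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        out = F.softmax(scores, dim=-1) @ v            # (batch, n_heads, seq_len, d_head)
        out = out.transpose(1, 2).reshape(b, t, -1)    # concatenate heads
        return self.out_proj(out)
```

For example, with $n = 12$ heads and $d = 64$, the head-specific parameters amount to $3 \times 12 \times 64 = 2{,}304$, while the abstract's formula gives roughly 1.6 million additional parameters for standard MHA's per-head projections.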
