Recasting Self-Attention with Holographic Reduced Representations

31 May 2023
Mohammad Mahmudul Alam
Edward Raff
Stella Biderman
Tim Oates
James Holt
Abstract

In recent years, self-attention has become the dominant paradigm for sequence modeling in a variety of domains. However, in domains with very long sequence lengths, the $\mathcal{O}(T^2)$ memory and $\mathcal{O}(T^2 H)$ compute costs can make using transformers infeasible. Motivated by problems in malware detection, where sequence lengths of $T \geq 100{,}000$ are a roadblock to deep learning, we re-cast self-attention using the neuro-symbolic approach of Holographic Reduced Representations (HRR). In doing so we follow the same high-level strategy as standard self-attention: a set of queries matching against a set of keys, and returning a weighted response of the values for each key. Implemented as a "Hrrformer", we obtain several benefits, including $\mathcal{O}(T H \log H)$ time complexity, $\mathcal{O}(T H)$ space complexity, and convergence in $10\times$ fewer epochs. Nevertheless, the Hrrformer achieves near state-of-the-art accuracy on the Long Range Arena (LRA) benchmarks, and we are able to learn with just a single layer. Combined, these benefits make our Hrrformer the first viable Transformer for such long malware classification sequences and up to $280\times$ faster to train on the Long Range Arena benchmark. Code is available at https://github.com/NeuromorphicComputationResearchProgram/Hrrformer
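The query/key/value retrieval described in the abstract rests on the two core HRR operations: binding (circular convolution) and unbinding (circular convolution with an approximate inverse). The sketch below is not the authors' Hrrformer implementation (see the linked repository for that); it is a minimal NumPy illustration of these operations, with the function names (`bind`, `unbind`, `approx_inverse`) and toy shapes chosen purely for illustration.

```python
import numpy as np

def bind(x, y):
    # HRR binding: circular convolution, computed via FFT in O(H log H).
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(y), n=x.shape[-1])

def approx_inverse(x):
    # HRR approximate inverse (involution): keep element 0, reverse the rest.
    return np.concatenate([x[..., :1], x[..., :0:-1]], axis=-1)

def unbind(s, x):
    # Retrieve a noisy estimate of the item bound to x from the superposition s.
    return bind(s, approx_inverse(x))

# Toy demonstration (illustrative shapes): T key/value pairs of dimension H
# are bound and summed into a single H-dimensional "memory", then queried.
rng = np.random.default_rng(0)
T, H = 8, 256
keys = rng.normal(0.0, 1.0 / np.sqrt(H), size=(T, H))
values = rng.normal(0.0, 1.0 / np.sqrt(H), size=(T, H))

memory = bind(keys, values).sum(axis=0)   # O(T H log H) time, O(H) extra space
retrieved = unbind(memory, keys[3])       # noisy reconstruction of values[3]

# The retrieved vector should be most similar to the value bound to keys[3].
sims = values @ retrieved / (np.linalg.norm(values, axis=1) * np.linalg.norm(retrieved))
print(sims.argmax())                      # expected output: 3
```

The complexity figures quoted in the abstract are consistent with this picture: each binding is an FFT-based circular convolution over $H$-dimensional vectors, and retrieval reads from a fixed-size superposition rather than a $T \times T$ attention matrix.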
