SDBERT: SparseDistilBERT, a faster and smaller BERT model

28 July 2022
Devaraju Vinoda
P. K. Yadav
Abstract

In this work we introduce a new transformer architecture called SparseDistilBERT (SDBERT), which combines sparse attention and knowledge distillation (KD). We implemented a sparse attention mechanism to reduce the quadratic dependency on input length to linear. In addition to lowering the computational complexity of the model, we applied knowledge distillation. We reduced the size of the BERT model by 60% while retaining 97% of its performance, and training required only 40% of the original time.
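To make the two ingredients concrete, the sketch below illustrates the general pattern, not the authors' implementation: a windowed (sparse) self-attention that restricts each token to a local neighbourhood, and a standard distillation objective that blends a temperature-scaled soft-target term with the usual cross-entropy. The window size, temperature, and loss weight are illustrative assumptions; for clarity the attention builds a banded mask over the full score matrix, whereas a truly linear-time kernel would compute only the in-window blocks.

```python
import torch
import torch.nn.functional as F


def local_attention(q, k, v, window: int = 64):
    """Windowed (sparse) self-attention: each token attends only to
    neighbours within +/- `window` positions.

    Note: this materializes the full score matrix and masks it, which is
    easy to read but still O(n^2); a linear-time implementation computes
    only the banded blocks. Shapes: (batch, heads, seq_len, head_dim).
    """
    n = q.size(-2)
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5   # (B, H, n, n)
    idx = torch.arange(n, device=q.device)
    band = (idx[None, :] - idx[:, None]).abs() <= window    # banded sparsity pattern
    scores = scores.masked_fill(~band, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Knowledge-distillation objective: KL term on temperature-softened
    teacher/student distributions (scaled by T^2) plus hard-label
    cross-entropy, mixed with an assumed weight `alpha`."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```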
