Attention Condensation via Sparsity Induced Regularized Training

3 March 2025
Eli Sason
Darya Frolova
Boris Nazarov
Felix Goldberd
ArXiv · PDF · HTML
Abstract

As the context window expands, self-attention increasingly dominates the transformer's inference time. Accelerating attention computation while minimizing performance degradation is therefore essential for the efficient deployment of Large Language Models (LLMs). In this study, we extend a theoretical framework of attention sparsity in LLMs. A customized loss function is designed to enforce sparsity by restricting the number of top elements in the attention matrix. We perform an initial set of evaluations with GPT-2 to show the effectiveness of our sparsification approach. The attention matrices of models trained with the proposed loss are both sparse and effective at capturing relevant input dependencies. Work is ongoing to demonstrate the value of our approach on larger models and different architectures.
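The abstract does not specify the form of the loss. As a rough illustration only, the sketch below implements one plausible top-k formulation in PyTorch: it penalizes the attention probability mass falling outside each row's k largest entries, so minimizing it pushes each query to attend to at most k keys. The function names, the choice of k, and the weight lambda_sparse are assumptions for illustration, not the paper's actual training objective.

# Minimal sketch (assumption, not the paper's loss): penalize attention mass
# outside each row's top-k entries so that training "condenses" attention.
import torch


def topk_attention_sparsity_loss(attn: torch.Tensor, k: int) -> torch.Tensor:
    """attn: (batch, heads, q_len, k_len) row-stochastic attention weights.

    Returns the mean probability mass lying outside the k largest entries of
    each attention row; driving it toward zero concentrates attention on at
    most k keys per query.
    """
    kept_mass = attn.topk(k, dim=-1).values.sum(dim=-1)  # mass on top-k keys
    return (1.0 - kept_mass).mean()                      # mass to suppress


def training_step(model, batch, k=8, lambda_sparse=0.1):
    """Hypothetical training step: task loss plus the sparsity regularizer,
    assuming a Hugging Face GPT-2 model called with output_attentions=True."""
    outputs = model(**batch, output_attentions=True)
    reg = sum(topk_attention_sparsity_loss(a, k) for a in outputs.attentions)
    reg = reg / len(outputs.attentions)
    return outputs.loss + lambda_sparse * reg

In practice, k and lambda_sparse would need to be tuned per model; the abstract reports evaluations on GPT-2 but does not give the exact objective or hyperparameters.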

View on arXiv: https://arxiv.org/abs/2503.01564
@article{sason2025_2503.01564,
  title={Attention Condensation via Sparsity Induced Regularized Training},
  author={Eli Sason and Darya Frolova and Boris Nazarov and Felix Goldberd},
  journal={arXiv preprint arXiv:2503.01564},
  year={2025}
}