SageAttention2++: A More Efficient Implementation of SageAttention2

27 May 2025
Jintao Zhang, Xiaoming Xu, Jia Wei, Haofeng Huang, Pengle Zhang, Chendong Xiang, Jun Zhu, Jianfei Chen
Topics: MQ, VLM
Abstract

The efficiency of attention is critical because its time complexity grows quadratically with sequence length. SageAttention2 addresses this by utilizing quantization to accelerate matrix multiplications (Matmul) in attention. To further accelerate SageAttention2, we propose utilizing the faster instruction for FP8 Matmul accumulated in FP16. The instruction is 2x faster than the FP8 Matmul used in SageAttention2. Our experiments show that SageAttention2++ achieves a 3.9x speedup over FlashAttention while maintaining the same attention accuracy as SageAttention2. This means SageAttention2++ effectively accelerates various models, including those for language, image, and video generation, with negligible end-to-end metrics loss. The code will be available at this https URL.
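
The change the abstract describes, swapping the FP32-accumulating FP8 tensor-core instruction for its FP16-accumulating counterpart, can be illustrated with a small CUDA sketch. The PTX mnemonics and fragment layouts below are assumptions for an FP8-capable GPU (e.g., sm_89 or later) and a recent PTX ISA; they are not taken from the authors' kernels, which live in the linked repository. Each wrapper issues one warp-level m16n8k32 e4m3 x e4m3 MMA; the only difference is the accumulator type, which is what yields the roughly 2x instruction throughput the abstract cites.

// Hypothetical sketch (not the authors' code): two warp-level tensor-core
// paths for an m16n8k32 FP8 (e4m3) MMA tile. Compile for an FP8-capable
// architecture (e.g., -arch=sm_89) with a CUDA toolkit whose PTX ISA
// exposes these mnemonics. Fragments a/b/c/d are assumed to already be
// packed in the register layout the mma shape expects.
#include <cstdint>

// FP8 inputs, FP32 accumulators (the path used by SageAttention2):
// per thread, D and C hold 4 floats; A is 4 x 32-bit regs; B is 2.
__device__ __forceinline__ void mma_f8f8f32(float d[4], const uint32_t a[4],
                                            const uint32_t b[2], const float c[4]) {
    asm volatile(
        "mma.sync.aligned.m16n8k32.row.col.f32.e4m3.e4m3.f32 "
        "{%0,%1,%2,%3}, {%4,%5,%6,%7}, {%8,%9}, {%10,%11,%12,%13};\n"
        : "=f"(d[0]), "=f"(d[1]), "=f"(d[2]), "=f"(d[3])
        : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]),
          "r"(b[0]), "r"(b[1]),
          "f"(c[0]), "f"(c[1]), "f"(c[2]), "f"(c[3]));
}

// FP8 inputs, FP16 accumulators (the path SageAttention2++ proposes):
// D and C shrink to 2 x 32-bit regs, each holding a packed pair of halves.
// On GPUs that expose it, this variant runs at about twice the throughput
// of the FP32-accumulating instruction above.
__device__ __forceinline__ void mma_f8f8f16(uint32_t d[2], const uint32_t a[4],
                                            const uint32_t b[2], const uint32_t c[2]) {
    asm volatile(
        "mma.sync.aligned.m16n8k32.row.col.f16.e4m3.e4m3.f16 "
        "{%0,%1}, {%2,%3,%4,%5}, {%6,%7}, {%8,%9};\n"
        : "=r"(d[0]), "=r"(d[1])
        : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]),
          "r"(b[0]), "r"(b[1]),
          "r"(c[0]), "r"(c[1]));
}

Because FP16 has a narrower range and lower precision than FP32, the viability of the second path rests on the quantized attention setup inherited from SageAttention2; the abstract reports that end-to-end accuracy matches SageAttention2 despite the cheaper accumulator.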

@article{zhang2025_2505.21136,
  title={SageAttention2++: A More Efficient Implementation of SageAttention2},
  author={Jintao Zhang and Xiaoming Xu and Jia Wei and Haofeng Huang and Pengle Zhang and Chendong Xiang and Jun Zhu and Jianfei Chen},
  journal={arXiv preprint arXiv:2505.21136},
  year={2025}
}