FlexiSAGA: A Flexible Systolic Array GEMM Accelerator for Sparse and Dense Processing

2 June 2025
Mika Markus Müller
Konstantin Lübeck
Alexander Louis-Ferdinand Jung
Jannik Steinmetz
Oliver Bringmann
Main: 14 pages, 13 figures. Appendix: 2 pages.
Abstract

Artificial Intelligence (AI) algorithms, such as Deep Neural Networks (DNNs), have become an important tool for a wide range of applications, from computer vision to natural language processing. However, the computational complexity of DNN inference poses a significant challenge, particularly for processing on resource-constrained edge devices. One promising approach to address this challenge is the exploitation of sparsity in DNN operator weights. In this work, we present FlexiSAGA, an architecturally configurable and dataflow-flexible AI hardware accelerator for the sparse and dense processing of general matrix multiplications (GEMMs). FlexiSAGA supports seven different sparse and dense dataflows, enabling efficient processing of resource-intensive DNN operators. Additionally, we propose a DNN pruning method specifically tailored to the FlexiSAGA architecture, allowing for near-optimal processing of dense and sparse convolution and fully-connected operators and facilitating a DNN/HW co-design flow. Our results show a whole-DNN sparse-over-dense inference speedup ranging from 1.41× up to 4.28×, outperforming commercial and literature-reported accelerator platforms.
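The core effect the abstract describes, skipping zero-valued weights so that a GEMM issues fewer multiply-accumulate (MAC) operations, can be sketched in software. This is a minimal illustrative model, not the FlexiSAGA hardware, its dataflows, or its pruning method; the matrices and the MAC counter are invented for the example.

```python
# Illustrative sketch (not the FlexiSAGA implementation): skipping zero
# weights in a GEMM reduces MAC work in proportion to weight sparsity,
# which is the effect a sparse accelerator exploits in hardware.

def dense_gemm(a, b):
    """Naive dense GEMM: every weight contributes a MAC."""
    n, k, m = len(a), len(a[0]), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    macs = 0
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i][j] += a[i][p] * b[p][j]
                macs += 1
    return out, macs

def sparse_gemm(a, b):
    """Same GEMM, but zero weights in `a` are skipped entirely."""
    n, k, m = len(a), len(a[0]), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    macs = 0
    for i in range(n):
        for p in range(k):
            w = a[i][p]
            if w == 0.0:
                continue  # pruned weight: no MAC issued
            for j in range(m):
                out[i][j] += w * b[p][j]
                macs += 1
    return out, macs

if __name__ == "__main__":
    # 50%-sparse weight matrix: expect 2x fewer MACs, identical result.
    a = [[1.0, 0.0, 2.0, 0.0],
         [0.0, 3.0, 0.0, 4.0]]
    b = [[1.0, 2.0],
         [3.0, 4.0],
         [5.0, 6.0],
         [7.0, 8.0]]
    dense_out, dense_macs = dense_gemm(a, b)
    sparse_out, sparse_macs = sparse_gemm(a, b)
    assert dense_out == sparse_out
    print(dense_macs / sparse_macs)  # sparse-over-dense MAC reduction: 2.0
```

In a systolic-array accelerator the saving comes from the hardware's sparse dataflow rather than a software branch, but the arithmetic it avoids is the same: MACs whose weight operand is zero.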

@article{müller2025_2506.01566,
  title={FlexiSAGA: A Flexible Systolic Array GEMM Accelerator for Sparse and Dense Processing},
  author={Mika Markus Müller and Konstantin Lübeck and Alexander Louis-Ferdinand Jung and Jannik Steinmetz and Oliver Bringmann},
  journal={arXiv preprint arXiv:2506.01566},
  year={2025}
}