

COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning

31 May 2025
Chamika Sudusinghe
Gerasimos Gerogiannis
Damitha Lenadora
Charles Block
Josep Torrellas
Charith Mendis
Main: 9 pages · 15 figures · 6 tables · Bibliography: 3 pages · Appendix: 6 pages
Abstract

Sparse tensor programs are essential in deep learning and graph analytics, driving the need for optimized processing. To meet this demand, specialized hardware accelerators are being developed. Optimizing these programs for accelerators is challenging for two reasons: program performance is highly sensitive to variations in sparse inputs, and early-stage accelerators rely on expensive simulators. Therefore, ML-based cost models used for optimizing such programs on general-purpose hardware are often ineffective for early-stage accelerators, as they require large datasets for proper training. To this end, we introduce COGNATE, a novel framework that leverages inexpensive data samples from general-purpose hardware (e.g., CPUs) to train cost models, followed by few-shot fine-tuning on emerging hardware. COGNATE exploits the homogeneity of input features across hardware platforms while effectively mitigating heterogeneity, enabling cost model training with just 5% of the data samples needed by accelerator-specific models to achieve comparable performance. We conduct extensive experiments to demonstrate that COGNATE outperforms existing techniques, achieving average speedups of 1.47x (up to 5.46x) for SpMM and 1.39x (up to 4.22x) for SDDMM.
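
The two-phase recipe the abstract describes (pre-train a cost model on abundant, cheap general-purpose-hardware measurements, then few-shot fine-tune it on scarce accelerator samples) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the CostModel architecture, the feature dimension, the hyperparameters, and the random tensors standing in for real (input features, measured runtime) pairs are all placeholders.

import torch
import torch.nn as nn

# Hypothetical number of input features describing a sparse workload
# (e.g., rows, columns, nnz, density statistics); purely illustrative.
FEATURE_DIM = 16

class CostModel(nn.Module):
    """Simple MLP regressor mapping workload features to predicted runtime."""
    def __init__(self, feature_dim: int = FEATURE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def train(model, features, runtimes, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), runtimes)
        loss.backward()
        opt.step()

# Phase 1: pre-train on plentiful, inexpensive CPU measurements.
cpu_x, cpu_y = torch.randn(10_000, FEATURE_DIM), torch.randn(10_000, 1)  # placeholder data
model = CostModel()
train(model, cpu_x, cpu_y, epochs=200, lr=1e-3)

# Phase 2: few-shot fine-tune on a small accelerator dataset
# (the abstract reports ~5% of the samples an accelerator-only model needs).
acc_x, acc_y = torch.randn(500, FEATURE_DIM), torch.randn(500, 1)  # placeholder data
train(model, acc_x, acc_y, epochs=50, lr=1e-4)  # lower LR to preserve transferred knowledge

In this sketch the fine-tuning phase reuses the pre-trained weights at a reduced learning rate, which is one common way to exploit the cross-platform feature homogeneity the abstract mentions while adapting to the target accelerator's behavior.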

@article{sudusinghe2025_2506.00424,
  title={COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning},
  author={Chamika Sudusinghe and Gerasimos Gerogiannis and Damitha Lenadora and Charles Block and Josep Torrellas and Charith Mendis},
  journal={arXiv preprint arXiv:2506.00424},
  year={2025}
}