
Autocomp: LLM-Driven Code Optimization for Tensor Accelerators

Main: 9 pages
Bibliography: 5 pages
Appendix: 19 pages
28 figures
3 tables
Abstract

Hardware accelerators, especially those designed for tensor processing, have become ubiquitous in today's computing landscape. However, even with significant efforts in building compilers, programming these tensor accelerators remains challenging, leaving much of their potential underutilized. Recently, large language models (LLMs), trained on large amounts of code, have shown significant promise in code generation and optimization tasks, but generating code for low-resource languages, such as specialized tensor accelerator dialects, remains difficult. We tackle this challenge with Autocomp, an approach that empowers accelerator programmers to leverage domain knowledge and hardware feedback to optimize code via an automated LLM-driven search. We accomplish this by: 1) formulating each optimization pass as a structured two-phase prompt, divided into planning and code generation phases, 2) inserting domain knowledge during planning via a concise and adaptable optimization menu, and 3) integrating correctness and performance metrics from hardware as feedback at each search iteration. Across three categories of representative workloads and two different accelerators, we demonstrate that Autocomp-optimized code runs 5.6x (GEMM) and 2.7x (convolution) faster than the vendor-provided library, and outperforms expert-level hand-tuned code by 1.4x (GEMM), 1.1x (convolution), and 1.3x (fine-grained linear algebra). Additionally, we demonstrate that optimization schedules generated by Autocomp can be reused across similar tensor operations, improving speedups by up to 24% under a fixed sample budget.
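The abstract outlines an iterative search in which each pass runs a planning prompt over an optimization menu, then a code generation prompt, with hardware correctness and latency used as feedback. The sketch below is only an illustration of that loop structure, not the authors' implementation; every name in it (plan_prompt, codegen_prompt, the llm/is_correct/measure callables, the default iteration and sample counts) is a hypothetical placeholder.

# Minimal sketch of an Autocomp-style two-phase optimization loop (illustrative only).
from typing import Callable, List, Tuple

def plan_prompt(code: str, menu: List[str], latency: float) -> str:
    # Phase 1: show the current code, its measured latency, and the optimization
    # menu, and ask the LLM to propose one optimization as a natural-language plan.
    return f"Current latency: {latency}\nMenu: {menu}\nCode:\n{code}\nPropose one optimization."

def codegen_prompt(code: str, plan: str) -> str:
    # Phase 2: ask the LLM to rewrite the code according to the chosen plan.
    return f"Plan:\n{plan}\nRewrite this code accordingly:\n{code}"

def autocomp_search(
    code: str,
    menu: List[str],
    llm: Callable[[str], str],          # prompt -> completion
    is_correct: Callable[[str], bool],  # functional check on hardware or a simulator
    measure: Callable[[str], float],    # latency (e.g., cycles) from hardware
    iterations: int = 10,
    samples_per_iter: int = 4,
) -> Tuple[str, float]:
    best_code, best_latency = code, measure(code)
    for _ in range(iterations):
        for _ in range(samples_per_iter):
            plan = llm(plan_prompt(best_code, menu, best_latency))   # planning phase
            candidate = llm(codegen_prompt(best_code, plan))         # code generation phase
            if not is_correct(candidate):   # correctness feedback from hardware
                continue
            latency = measure(candidate)    # performance feedback from hardware
            if latency < best_latency:      # keep the fastest correct candidate
                best_code, best_latency = candidate, latency
    return best_code, best_latency

In this reading, the menu injects domain knowledge into planning, while the correctness and latency checks gate which candidates survive each iteration; the paper's actual prompts, search width, and hardware interface may differ.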

@article{hong2025_2505.18574,
  title={Autocomp: LLM-Driven Code Optimization for Tensor Accelerators},
  author={Charles Hong and Sahil Bhatia and Alvin Cheung and Yakun Sophia Shao},
  journal={arXiv preprint arXiv:2505.18574},
  year={2025}
}