Exploring energy consumption of AI frameworks on a 64-core RV64 Server CPU

Abstract

In today's era of rapid technological advancement, artificial intelligence (AI) applications require large-scale, high-performance, and data-intensive computations, leading to significant energy demands. Addressing this challenge requires innovation in both hardware and software. Hardware manufacturers are developing new, efficient, and specialized solutions, with RISC-V emerging as a prominent player thanks to its open, extensible, and energy-efficient instruction set architecture (ISA). Simultaneously, software developers are creating new algorithms and frameworks, yet their energy efficiency often remains unclear. In this study, we conduct a comprehensive benchmark analysis of machine learning (ML) applications on the 64-core SOPHON SG2042 RISC-V CPU. We specifically analyze the energy consumption of deep learning inference models across three leading AI frameworks: PyTorch, ONNX Runtime, and TensorFlow. Our findings show that frameworks using the XNNPACK back-end, such as ONNX Runtime and TensorFlow, consume less energy than PyTorch, which is compiled with the native OpenBLAS back-end.
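
The abstract does not describe the measurement harness, but the kind of per-framework inference benchmark it refers to can be sketched as follows, using ONNX Runtime as an example. This is a minimal sketch, not the authors' code: the model file, input shape, sampling rate, and the read_power_watts() power-meter hook are assumptions standing in for whatever power-monitoring setup the paper actually used.

# Minimal sketch: time repeated ONNX Runtime inferences on a many-core CPU
# and integrate sampled power readings into an energy estimate (joules).
import time
import threading
import numpy as np
import onnxruntime as ort

def read_power_watts() -> float:
    # Placeholder stand-in for a board- or meter-specific power readout;
    # the real benchmark would query an external meter or on-board sensor.
    return 0.0

def run_benchmark(model_path="model.onnx", n_iters=100, n_threads=64):
    opts = ort.SessionOptions()
    opts.intra_op_num_threads = n_threads          # e.g. all 64 SG2042 cores
    sess = ort.InferenceSession(model_path, sess_options=opts)

    inp = sess.get_inputs()[0]
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape

    samples = []                                   # (timestamp, watts) pairs
    stop = threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append((time.time(), read_power_watts()))
            time.sleep(0.1)                        # 10 Hz sampling (assumed)

    t = threading.Thread(target=sampler, daemon=True)
    t.start()

    t0 = time.time()
    for _ in range(n_iters):
        sess.run(None, {inp.name: x})              # one inference per iteration
    elapsed = time.time() - t0
    stop.set()
    t.join()

    # Trapezoidal integration of the power trace gives energy in joules.
    energy_j = sum((samples[i + 1][0] - samples[i][0]) *
                   (samples[i + 1][1] + samples[i][1]) / 2
                   for i in range(len(samples) - 1))
    print(f"{n_iters} inferences in {elapsed:.2f} s, ~{energy_j:.1f} J total")

if __name__ == "__main__":
    run_benchmark()

The same loop, with the session swapped for a PyTorch module or a TensorFlow SavedModel, would allow a like-for-like comparison of energy per inference across the three frameworks.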

@article{malenza2025_2504.03774,
  title={Exploring energy consumption of AI frameworks on a 64-core RV64 Server CPU},
  author={Giulio Malenza and Francesco Targa and Adriano Marques Garcia and Marco Aldinucci and Robert Birke},
  journal={arXiv preprint arXiv:2504.03774},
  year={2025}
}