
AIPerf: Automated machine learning as an AI-HPC benchmark

Abstract

The plethora of complex artificial intelligence (AI) algorithms and available high performance computing (HPC) power stimulates the expeditious development of AI components in both hardware and software domains. Existing HPC and AI benchmarks fail to cover the variety of heterogeneous systems while providing a simple yet comprehensive measurement of cross-stack performance. To address these challenges, we propose an end-to-end benchmark suite utilizing automated machine learning (AutoML) as a representative AI application. Its extreme computational cost and scalability make AutoML a desirable workload for benchmarking AI-HPC. We implement the algorithms in a highly parallel and flexible way to ensure efficiency and customizability across diverse systems. The major metric quantifying system performance is floating-point operations per second (FLOPS), which is measured in a systematic and analytical approach. We verify the benchmark's stability at discrete timestamps on different types and scales of machines equipped with up to 400 AI accelerators. Our evaluations show that the benchmark scores scale linearly with the number of machines and reflect the overall AI computing power of a system. The source code, specifications and detailed procedures are publicly accessible on GitHub.
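The abstract states that the benchmark's main metric is achieved FLOPS, obtained analytically rather than from hardware counters. As a minimal, hypothetical sketch of that idea (not AIPerf's actual accounting, which spans full AutoML training runs), one can count the floating-point operations of a known kernel and divide by elapsed wall time:

```python
import time
import numpy as np

def estimate_flops(n=512, repeats=20):
    """Estimate achieved FLOPS for a dense matmul workload.

    Illustrative only: a dense n x n matrix multiply performs
    roughly 2*n^3 floating-point operations, so the operation
    count can be derived analytically instead of measured.
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    flop_count = 2 * n**3 * repeats  # analytical op count

    start = time.perf_counter()
    for _ in range(repeats):
        a @ b  # the timed workload
    elapsed = time.perf_counter() - start

    return flop_count / elapsed  # achieved FLOPS

print(f"achieved: {estimate_flops():.3e} FLOPS")
```

Summing such analytically derived operation counts across every kernel executed during a run, then dividing by total runtime, yields a cross-stack throughput figure comparable across heterogeneous systems.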
