
Flexible Hardware-Enabled Guarantees for AI Compute

James Petrie
Onni Aarne
Nora Ammann
David Dalrymple
Main: 37 pages, 1 table
Abstract

As artificial intelligence systems become increasingly powerful, they pose growing risks to international security, creating urgent coordination challenges that current governance approaches struggle to address without compromising sensitive information or national security. We propose flexible hardware-enabled guarantees (flexHEGs), which could be integrated with AI accelerators to enable trustworthy, privacy-preserving verification and enforcement of claims about AI development. FlexHEGs consist of an auditable guarantee processor that monitors accelerator usage and a secure enclosure providing physical tamper protection. The system would be fully open source with flexible, updatable verification capabilities. FlexHEGs could enable diverse governance mechanisms, including privacy-preserving model evaluations, controlled deployment, compute limits for training, and automated safety protocol enforcement. In this first part of a three-part series, we provide a comprehensive introduction to the flexHEG system, including an overview of the governance and security capabilities it offers, its potential development and adoption paths, and the remaining challenges and limitations it faces. While technically challenging, flexHEGs offer an approach to addressing emerging regulatory and international security challenges in frontier AI development.
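
The abstract names compute limits for training as one governance mechanism a guarantee processor could enforce. The toy Python sketch below is purely illustrative and not the paper's design: it shows only the general metering-and-authorization pattern such a processor might implement. All names (GuaranteeProcessor, flop_cap, authorize) and the numeric values are hypothetical.

# Illustrative sketch only (assumed design, not from the paper): a toy
# "guarantee processor" that meters accelerator usage and refuses to
# authorize work beyond a licensed compute cap.

from dataclasses import dataclass


@dataclass
class GuaranteeProcessor:
    """Toy model of an on-accelerator monitor enforcing a training compute limit."""
    flop_cap: float          # maximum total FLOPs permitted under the current policy
    flops_used: float = 0.0  # running total of metered compute

    def authorize(self, requested_flops: float) -> bool:
        """Approve a unit of work only if it stays within the cap."""
        if self.flops_used + requested_flops > self.flop_cap:
            return False  # refuse: the workload would exceed the agreed limit
        self.flops_used += requested_flops
        return True


if __name__ == "__main__":
    gp = GuaranteeProcessor(flop_cap=1e25)   # cap chosen arbitrarily for illustration
    step_flops = 1e23                        # metered FLOPs per training step
    steps_run = 0
    while gp.authorize(step_flops):
        steps_run += 1                       # real hardware would dispatch work here
    print(f"authorized {steps_run} steps before hitting the cap")

In an actual flexHEG, such checks would run inside tamper-protected hardware and rely on cryptographic attestation rather than a software counter; the sketch only conveys the enforcement loop at a conceptual level.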

@article{petrie2025_2506.15093,
  title={Flexible Hardware-Enabled Guarantees for AI Compute},
  author={James Petrie and Onni Aarne and Nora Ammann and David Dalrymple},
  journal={arXiv preprint arXiv:2506.15093},
  year={2025}
}