A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications
Ahmed Magooda
Alec Helyar
Kyle Jackson
David Sullivan
Chad Atalla
Emily Sheng
Dan Vann
Richard Edgar
Hamid Palangi
Roman Lutz
Hongliang Kong
Vincent Yun
Eslam Kamal
Federico Zarfati
Hanna Wallach
Sarah Bird
Mei Chen

Abstract
We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products and services. Our framework for automatically measuring harms from LLMs builds on existing technical and sociotechnical expertise and leverages the capabilities of state-of-the-art LLMs, such as GPT-4. We use this framework to run through several case studies investigating how different LLMs may violate a range of RAI-related principles. The framework may be employed alongside domain-specific sociotechnical expertise to create measurements for new harm areas in the future. By implementing this framework, we aim to enable more advanced harm measurement efforts and further the responsible use of LLMs.
View on arXivComments on this paper