A Principles-based Ethical Assurance Argument for AI and Autonomous Systems

Abstract

An assurance case presents a clear and defensible argument, supported by evidence, that a system will operate as intended in a particular context. Typically, an assurance case argues that a system will be acceptably safe in its intended context. One emerging proposal within the Trustworthy AI research community is to extend and apply this methodology to provide assurance that the use of an AI system or an autonomous system (AI/AS) will be acceptably ethical in a particular context. In this paper, we advance this proposal by presenting a principles-based ethical assurance (PBEA) argument pattern for AI/AS. The PBEA argument pattern offers a framework for reasoning about the overall ethical acceptability of the use of a given AI/AS, and it could serve as an early prototype template for specific ethical assurance cases. The four core ethical principles that form the basis of the PBEA argument pattern are: justice; beneficence; non-maleficence; and respect for personal autonomy. Throughout, we connect stages of the argument pattern to examples of AI/AS applications, which helps to show its initial plausibility.