Explainable Agents Through Social Cues: A Review

Abstract

How to make robots explainable has seen a surge of interest in Human-Robot Interaction (HRI) over the last three years, and many terms in HRI refer to this concept, e.g., transparency or legibility. One reason for this high variance in terminology is the unique array of modalities that embodied agents have access to compared with non-embodied agents. Another reason is that different authors use these terms in different ways. We therefore review the existing literature on explainability and organize it by (1) providing an overview of existing definitions, (2) showing how explainability is implemented and how it exploits different modalities, and (3) showing how the impact of explainability is measured. Additionally, we present a list of open questions and challenges that highlight areas requiring further investigation by the community. This provides the interested reader with an overview of the current state of the art.
