
To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures

Abstract

While automatic performance metrics are crucial for machine learning of artificial human-like behaviour, the gold standard for evaluation remains human judgement. The subjective evaluation of artificial human-like behaviour in embodied conversational agents is, however, expensive, and little is known about the quality of the data it returns. Two approaches to subjective evaluation can be broadly distinguished: one relying on ratings, the other on pairwise comparisons. In this study we use co-speech gestures to compare the two against each other and answer questions about their appropriateness for the evaluation of artificial behaviour. We consider their ability to rate quality, but also aspects pertaining to the effort of use and the time required to collect subjective data. We use crowdsourcing to rate the quality of co-speech gestures in avatars, assessing which method picks up more detail in subjective assessments. We compared gestures generated by three different machine learning models with various levels of behavioural quality. We found that both approaches were able to rank the videos according to quality and that the rankings correlated significantly, showing that in terms of quality there is no preference of one method over the other. We also found that pairwise comparisons were slightly faster and came with improved inter-rater reliability, suggesting that for small-scale studies pairwise comparisons are to be favoured over ratings.
