Evaluating Crowdsourcing Participants in the Absence of Ground-Truth

Abstract
Given a supervised or semi-supervised learning scenario in which multiple annotators are available, we consider the problem of identifying adversarial or unreliable annotators.
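The paper does not spell out its method here, but a common ground-truth-free baseline for this problem is to score each annotator by agreement with the per-item majority vote and flag low scorers. The sketch below is an illustrative assumption, not the paper's algorithm; the function name `flag_unreliable` and the `threshold` parameter are hypothetical.

```python
from collections import Counter

def flag_unreliable(annotations, threshold=0.5):
    """Score annotators by agreement with the per-item majority label
    (a simple proxy when no ground truth is available) and flag those
    scoring below `threshold`.

    annotations: dict mapping annotator -> {item: label}
    Returns (scores, flagged) where scores maps annotator -> agreement rate.
    """
    # Pool all labels per item and take the majority as a pseudo-truth.
    per_item = {}
    for labels in annotations.values():
        for item, label in labels.items():
            per_item.setdefault(item, []).append(label)
    majority = {item: Counter(lbls).most_common(1)[0][0]
                for item, lbls in per_item.items()}

    # Each annotator's score is the fraction of their labels that
    # match the majority label for the same item.
    scores = {}
    for annotator, labels in annotations.items():
        agree = sum(labels[i] == majority[i] for i in labels)
        scores[annotator] = agree / len(labels)
    flagged = [a for a, s in scores.items() if s < threshold]
    return scores, flagged

# Example: two consistent annotators and one that systematically flips labels.
annotations = {
    "a1": {"x1": 1, "x2": 0, "x3": 1},
    "a2": {"x1": 1, "x2": 0, "x3": 1},
    "a3": {"x1": 0, "x2": 1, "x3": 0},  # adversarial: disagrees everywhere
}
scores, flagged = flag_unreliable(annotations)
# a1 and a2 score 1.0; a3 scores 0.0 and is flagged.
```

Note that a majority-vote proxy fails when adversaries outnumber honest annotators on an item, which is one motivation for the more principled approaches this line of work studies.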