Needle in a Haystack: An Analysis of Finding Qualified Workers on MTurk for Summarization

20 December 2022
Lining Zhang
João Sedoc
Yufang Hou
Daniel Deutsch
Elizabeth Clark
Yixin Liu
Saad Mahamood
Sebastian Gehrmann
Miruna Clinciu
Khyathi Chandu
arXiv:2212.10397
Abstract

The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. Annotation quality can be affected by various factors, such as the annotation instructions, the Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations that could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations, obtaining high-quality annotations while optimizing the use of resources. This paper can serve as a basis for recruiting qualified annotators in other challenging annotation tasks.
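The abstract describes the qualification pipeline only at a high level: three successive filtering steps applied before workers perform the real evaluation task. The sketch below is a minimal illustration of that staged-filtering idea, not the authors' actual procedure; the stage criteria (a screening questionnaire, attention checks, and agreement with gold annotations) and all thresholds are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    worker_id: str
    # Hypothetical signals; the paper's actual criteria are not given in the abstract.
    passed_screening: bool = False    # e.g., a qualification questionnaire
    attention_score: float = 0.0      # e.g., accuracy on attention-check items
    agreement_with_gold: float = 0.0  # e.g., agreement with expert annotations

def three_step_filter(workers, attention_threshold=0.8, agreement_threshold=0.6):
    """Apply three successive filters, cheapest first, so that more expensive
    checks only run on workers who survive the earlier steps."""
    # Step 1: keep workers who pass an initial screening step.
    pool = [w for w in workers if w.passed_screening]
    # Step 2: keep workers who pass embedded attention checks.
    pool = [w for w in pool if w.attention_score >= attention_threshold]
    # Step 3: keep workers whose judgments agree with gold annotations.
    return [w for w in pool if w.agreement_with_gold >= agreement_threshold]

if __name__ == "__main__":
    candidates = [
        Worker("A1", passed_screening=True, attention_score=0.9, agreement_with_gold=0.7),
        Worker("A2", passed_screening=True, attention_score=0.5, agreement_with_gold=0.9),
        Worker("A3", passed_screening=False),
    ]
    qualified = three_step_filter(candidates)
    print([w.worker_id for w in qualified])  # ['A1']
```

Ordering the stages from cheapest to most expensive reflects the resource-optimization goal the abstract mentions: most unqualified workers are rejected before any costly gold-standard comparison is needed.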
