ResearchTrend.AI


arXiv:2303.15056
ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks

27 March 2023
Fabrizio Gilardi
Meysam Alizadeh
M. Kubli
    AI4MH
Abstract

Many NLP applications require manual data annotations for a variety of tasks, notably to train classifiers or evaluate the performance of unsupervised models. Depending on the size and degree of complexity, the tasks may be conducted by crowd-workers on platforms such as MTurk as well as trained annotators, such as research assistants. Using a sample of 2,382 tweets, we demonstrate that ChatGPT outperforms crowd-workers for several annotation tasks, including relevance, stance, topics, and frames detection. Specifically, the zero-shot accuracy of ChatGPT exceeds that of crowd-workers for four out of five tasks, while ChatGPT's intercoder agreement exceeds that of both crowd-workers and trained annotators for all tasks. Moreover, the per-annotation cost of ChatGPT is less than $0.003 -- about twenty times cheaper than MTurk. These results show the potential of large language models to drastically increase the efficiency of text classification.
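The two evaluation metrics the abstract relies on, zero-shot accuracy against trusted labels and intercoder agreement across repeated annotation runs, can be sketched in a few lines. The labels below are hypothetical placeholders, not data from the paper:

```python
def accuracy(labels, gold):
    """Fraction of annotations that match the trusted (gold) labels."""
    return sum(a == g for a, g in zip(labels, gold)) / len(gold)

def percent_agreement(run_a, run_b):
    """Intercoder agreement: share of items labeled identically in two runs."""
    return sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)

# Hypothetical relevance annotations for four tweets.
gold  = ["relevant", "irrelevant", "relevant", "relevant"]
run_1 = ["relevant", "irrelevant", "relevant", "irrelevant"]  # e.g. annotator pass 1
run_2 = ["relevant", "irrelevant", "relevant", "relevant"]    # e.g. annotator pass 2

print(accuracy(run_1, gold))            # 0.75
print(percent_agreement(run_1, run_2))  # 0.75
```

In the paper's setup, each tweet is annotated more than once, so agreement can be computed for ChatGPT (across repeated queries) just as for human coders.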
