ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

You Are What You Annotate: Towards Better Models through Annotator Representations
arXiv:2305.14663

24 May 2023
Naihao Deng
Xinliang Frederick Zhang
Siyang Liu
Winston Wu
Lu Wang
Rada Mihalcea

Papers citing "You Are What You Annotate: Towards Better Models through Annotator Representations"

6 / 6 papers shown
Is LLM an Overconfident Judge? Unveiling the Capabilities of LLMs in Detecting Offensive Language with Annotation Disagreement
Junyu Lu
Kai Ma
Kaichun Wang
Kelaiti Xiao
Roy Ka-Wei Lee
Bo Xu
Liang Yang
Hongfei Lin
10 Feb 2025
Training and Evaluating with Human Label Variation: An Empirical Study
Kemal Kurniawan
Meladel Mistica
Timothy Baldwin
Jey Han Lau
03 Feb 2025
Cost-Efficient Subjective Task Annotation and Modeling through Few-Shot Annotator Adaptation
Preni Golazizian
Ali Omrani
Alireza S. Ziabari
Morteza Dehghani
21 Feb 2024
GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives
Vinodkumar Prabhakaran
Christopher Homan
Lora Aroyo
Aida Mostafazadeh Davani
Alicia Parrish
Alex S. Taylor
Mark Díaz
Ding Wang
Greg Serapio-García
09 Nov 2023
Agreeing to Disagree: Annotating Offensive Language Datasets with Annotators' Disagreement
Elisa Leonardelli
Stefano Menini
Alessio Palmero Aprosio
Marco Guerini
Sara Tonelli
28 Sep 2021
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva
Yoav Goldberg
Jonathan Berant
21 Aug 2019