Ground-Truth, Whose Truth? -- Examining the Challenges with Annotating Toxic Text Datasets

7 December 2021
Kofi Arhin, Ioana Baldini, Dennis L. Wei, Karthikeyan N. Ramamurthy, Moninder Singh

Papers citing "Ground-Truth, Whose Truth? -- Examining the Challenges with Annotating Toxic Text Datasets"

12 / 12 papers shown
Combating Toxic Language: A Review of LLM-Based Strategies for Software Engineering
Hao Zhuo, Yicheng Yang, Kewen Peng
21 Apr 2025

Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback
Emilia Agis Lerner, Florian E. Dorner, Elliott Ash, Naman Goel
09 Jun 2024

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations
Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, ..., Aashka Trivedi, Kush R. Varshney, Dennis L. Wei, Shalisha Witherspoon, Marcel Zalmanovici
09 Mar 2024

SoUnD Framework: Analyzing (So)cial Representation in (Un)structured (D)ata
Mark Díaz, Sunipa Dev, Emily Reif, Remi Denton, Vinodkumar Prabhakaran
28 Nov 2023

Annotation Sensitivity: Training Data Collection Methods Affect Model Performance
Christoph Kern, Stephanie Eckman, Jacob Beck, Rob Chew, Bolei Ma, Frauke Kreuter
23 Nov 2023

GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives
Vinodkumar Prabhakaran, Christopher Homan, Lora Aroyo, Aida Mostafazadeh Davani, Alicia Parrish, Alex S. Taylor, Mark Díaz, Ding Wang, Greg Serapio-García
09 Nov 2023

Modeling subjectivity (by Mimicking Annotator Annotation) in toxic comment identification across diverse communities
Senjuti Dutta, Sid Mittal, Sherol Chen, Deepak Ramachandran, Ravi Rajakumar, Ian D. Kivlichan, Sunny Mak, Alena Butryna, Praveen Paritosh
01 Nov 2023

DICES Dataset: Diversity in Conversational AI Evaluation for Safety
Lora Aroyo, Alex S. Taylor, Mark Díaz, Christopher Homan, Alicia Parrish, Greg Serapio-García, Vinodkumar Prabhakaran, Ding Wang
20 Jun 2023

Critical Perspectives: A Benchmark Revealing Pitfalls in PerspectiveAPI
Lorena Piedras, Lucas Rosenblatt, Julia Wilkins
05 Jan 2023

The Risks of Machine Learning Systems
Samson Tan, Araz Taeihagh, K. Baxter
21 Apr 2022

Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, Noah A. Smith
15 Nov 2021

Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant
21 Aug 2019