ResearchTrend.AI
Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection

15 November 2021
Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, Noah A. Smith
arXiv:2111.07997

Papers citing "Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection"

15 of 165 citing papers shown
KOLD: Korean Offensive Language Dataset
Young-kuk Jeong, Juhyun Oh, Jaimeen Ahn, Jongwon Lee, Jihyung Moon, Sungjoon Park, Alice H. Oh
23 May 2022
Mitigating Toxic Degeneration with Empathetic Data: Exploring the Relationship Between Toxicity and Empathy
Allison Lahnala, Charles F. Welch, Béla Neuendorf, Lucie Flek
15 May 2022
Handling and Presenting Harmful Text in NLP Research
Hannah Rose Kirk, Abeba Birhane, Bertie Vidgen, Leon Derczynski
29 Apr 2022
Experimental Standards for Deep Learning in Natural Language Processing Research
Dennis Ulmer, Elisa Bassignana, Max Müller-Eberstein, Daniel Varab, Mike Zhang, Rob van der Goot, Christian Hardmeier, Barbara Plank
13 Apr 2022
Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
Fatemehsadat Mireshghallah, Kartik Goyal, Taylor Berg-Kirkpatrick
24 Mar 2022
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar
17 Mar 2022
Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks
Jingyan Zhou, Jiawen Deng, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, Helen Meng
16 Feb 2022
Describing Differences between Text Distributions with Natural Language [VLM]
Ruiqi Zhong, Charles Burton Snell, Dan Klein, Jacob Steinhardt
28 Jan 2022
Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection
Suchin Gururangan, Dallas Card, Sarah K. Dreier, E. K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith
25 Jan 2022
Two Contrasting Data Annotation Paradigms for Subjective NLP Tasks
Paul Röttger, Bertie Vidgen, Dirk Hovy, J. Pierrehumbert
14 Dec 2021
Can Machines Learn Morality? The Delphi Experiment [FaML]
Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny T. Liang, ..., Yulia Tsvetkov, Oren Etzioni, Maarten Sap, Regina A. Rini, Yejin Choi
14 Oct 2021
Introducing an Abusive Language Classification Framework for Telegram to Investigate the German Hater Community
Maximilian Wich, Adrianna Górniak, Tobias Eder, Daniel Bartmann, Burak Enes Çakici, Georg Groh
15 Sep 2021
Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines
Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, Yejin Choi
18 Apr 2021
Disembodied Machine Learning: On the Illusion of Objectivity in NLP
Zeerak Talat, Smarika Lulz, Joachim Bingel, Isabelle Augenstein
28 Jan 2021
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant
21 Aug 2019