ResearchTrend.AI
Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting

29 April 2020
Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, T. Zhao

Papers citing "Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting"

14 / 14 papers shown
Target Span Detection for Implicit Harmful Content
  Nazanin Jafari, James Allan, Sheikh Muhammad Sarwar
  28 Mar 2024
Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models
  Yueqing Liang, Lu Cheng, Ali Payani, Kai Shu
  15 Nov 2023
A Survey on Fairness in Large Language Models
  Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang
  20 Aug 2023
An Invariant Learning Characterization of Controlled Text Generation
  Carolina Zheng, Claudia Shi, Keyon Vafa, Amir Feder, David M. Blei
  31 May 2023
Should We Attend More or Less? Modulating Attention for Fairness
  A. Zayed, Gonçalo Mordido, Samira Shabanian, Sarath Chandar
  22 May 2023
TCAB: A Large-Scale Text Classification Attack Benchmark
  Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd
  21 Oct 2022
Fairness Reprogramming
  Guanhua Zhang, Yihua Zhang, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang
  21 Sep 2022
Toward Understanding Bias Correlations for Mitigation in NLP
  Lu Cheng, Suyu Ge, Huan Liu
  24 May 2022
Easy Adaptation to Mitigate Gender Bias in Multilingual Text Classification
  Xiaolei Huang
  12 Apr 2022
Handling Bias in Toxic Speech Detection: A Survey
  Tanmay Garg, Sarah Masud, Tharun Suresh, Tanmoy Chakraborty
  26 Jan 2022
Enhancing Model Robustness and Fairness with Causality: A Regularization Approach
  Zhao Wang, Kai Shu, A. Culotta
  03 Oct 2021
A Survey of Race, Racism, and Anti-Racism in NLP
  Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov
  21 Jun 2021
Why Attentions May Not Be Interpretable?
  Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, Fei-Yue Wang
  10 Jun 2020
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
  Alexandra Chouldechova
  24 Oct 2016