Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems

11 May 2018
S. Kiritchenko
Saif M. Mohammad
    FaML

Papers citing "Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems"

50 of 250 citing papers shown
Identification of Bias Against People with Disabilities in Sentiment Analysis and Toxicity Detection Models
Pranav Narayanan Venkit
Shomir Wilson
25 Nov 2021
Assessing gender bias in medical and scientific masked language models with StereoSet
Robert Robinson
15 Nov 2021
Contrastive Clustering: Toward Unsupervised Bias Reduction for Emotion and Sentiment Classification
Jared Mowery
14 Nov 2021
Modeling Techniques for Machine Learning Fairness: A Survey
Mingyang Wan
Daochen Zha
Ninghao Liu
Na Zou
SyDa
FaML
04 Nov 2021
Robust Contrastive Learning Using Negative Samples with Diminished Semantics
Songwei Ge
Shlok Kumar Mishra
Haohan Wang
Chun-Liang Li
David Jacobs
SSL
27 Oct 2021
Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates
Xiaochuang Han
Yulia Tsvetkov
TDI
07 Oct 2021
Multi-Objective Few-shot Learning for Fair Classification
Ishani Mondal
Procheta Sen
Debasis Ganguly
FaML
05 Oct 2021
Enhancing Model Robustness and Fairness with Causality: A Regularization Approach
Zhao Wang
Kai Shu
A. Culotta
OOD
03 Oct 2021
Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens
Saad Hassan
Matt Huenerfauth
Cecilia Ovesdotter Alm
01 Oct 2021
Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?
Rochelle Choenni
Ekaterina Shutova
R. Rooij
21 Sep 2021
Adversarial Scrubbing of Demographic Information for Text Classification
Somnath Basu Roy Chowdhury
Sayan Ghosh
Yiyuan Li
Junier B. Oliva
Shashank Srivastava
Snigdha Chaturvedi
17 Sep 2021
Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis
Saif M. Mohammad
17 Sep 2021
On the validity of pre-trained transformers for natural language processing in the software engineering domain
Julian von der Mosel
Alexander Trautsch
Steffen Herbold
10 Sep 2021
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation
Shahar Levy
Koren Lazar
Gabriel Stanovsky
08 Sep 2021
A Generative Approach for Mitigating Structural Biases in Natural Language Inference
Dimion Asael
Zachary M. Ziegler
Yonatan Belinkov
31 Aug 2021
On Measures of Biases and Harms in NLP
Sunipa Dev
Emily Sheng
Jieyu Zhao
Aubrie Amstutz
Jiao Sun
...
M. Sanseverino
Jiin Kim
Akihiro Nishi
Nanyun Peng
Kai-Wei Chang
07 Aug 2021
GENder-IT: An Annotated English-Italian Parallel Challenge Set for Cross-Linguistic Natural Gender Phenomena
Eva Vanmassenhove
J. Monti
05 Aug 2021
Trustworthy AI: A Computational Perspective
Haochen Liu
Yiqi Wang
Wenqi Fan
Xiaorui Liu
Yaxin Li
Shaili Jain
Yunhao Liu
Anil K. Jain
Jiliang Tang
FaML
12 Jul 2021
Ethics Sheets for AI Tasks
Saif M. Mohammad
02 Jul 2021
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Paula Czarnowska
Yogarshi Vyas
Kashif Shah
28 Jun 2021
A Source-Criticism Debiasing Method for GloVe Embeddings
Hope McGovern
25 Jun 2021
A Survey of Race, Racism, and Anti-Racism in NLP
Anjalie Field
Su Lin Blodgett
Zeerak Talat
Yulia Tsvetkov
21 Jun 2021
pysentimiento: A Python Toolkit for Opinion Mining and Social NLP tasks
Juan Manuel Pérez
Mariela Rajngewerc
Juan Carlos Giudici
D. Furman
Franco Luque
Laura Alonso Alemany
María Vanina Martínez
17 Jun 2021
Evaluating Gender Bias in Hindi-English Machine Translation
Gauri Gupta
Krithika Ramesh
Sanjay Singh
16 Jun 2021
Understanding and Evaluating Racial Biases in Image Captioning
Dora Zhao
Angelina Wang
Olga Russakovsky
16 Jun 2021
Learning Stable Classifiers by Transferring Unstable Features
Yujia Bao
Shiyu Chang
Regina Barzilay
OOD
15 Jun 2021
Towards Equity and Algorithmic Fairness in Student Grade Prediction
Weijie Jiang
Z. Pardos
FaML
14 May 2021
What's in the Box? A Preliminary Analysis of Undesirable Content in the Common Crawl Corpus
A. Luccioni
J. Viviano
06 May 2021
Detoxifying Language Models Risks Marginalizing Minority Voices
Albert Xu
Eshaan Pathak
Eric Wallace
Suchin Gururangan
Maarten Sap
Dan Klein
13 Apr 2021
Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation
Chong Zhang
Jieyu Zhao
Huan Zhang
Kai-Wei Chang
Cho-Jui Hsieh
AAML
12 Apr 2021
What Will it Take to Fix Benchmarking in Natural Language Understanding?
Samuel R. Bowman
George E. Dahl
ELM
ALM
05 Apr 2021
FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders
Pengyu Cheng
Weituo Hao
Siyang Yuan
Shijing Si
Lawrence Carin
11 Mar 2021
Towards generalisable hate speech detection: a review on obstacles and solutions
Wenjie Yin
A. Zubiaga
17 Feb 2021
Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
Hannah Rose Kirk
Yennie Jun
Haider Iqbal
Elias Benussi
Filippo Volpin
F. Dreyer
Aleksandar Shtedritski
Yuki M. Asano
08 Feb 2021
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
Jwala Dhamala
Tony Sun
Varun Kumar
Satyapriya Krishna
Yada Pruksachatkun
Kai-Wei Chang
Rahul Gupta
27 Jan 2021
Diverse Adversaries for Mitigating Bias in Training
Xudong Han
Timothy Baldwin
Trevor Cohn
25 Jan 2021
Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models
Daniel de Vassimon Manela
D. Errington
Thomas Fisher
B. V. Breugel
Pasquale Minervini
24 Jan 2021
Fairness in Machine Learning
L. Oneto
Silvia Chiappa
FaML
31 Dec 2020
DynaSent: A Dynamic Benchmark for Sentiment Analysis
Christopher Potts
Zhengxuan Wu
Atticus Geiger
Douwe Kiela
30 Dec 2020
Robustness to Spurious Correlations in Text Classification via Automatically Generated Counterfactuals
Zhao Wang
A. Culotta
CML
OOD
18 Dec 2020
Fairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification
Robert Adragna
Elliot Creager
David Madras
R. Zemel
OOD
FaML
12 Nov 2020
Investigating Societal Biases in a Poetry Composition System
Emily Sheng
David C. Uthus
05 Nov 2020
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Marion Bartl
Malvina Nissim
Albert Gatt
27 Oct 2020
On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning
Xisen Jin
Francesco Barbieri
Brendan Kennedy
Aida Mostafazadeh Davani
Leonardo Neves
Xiang Ren
24 Oct 2020
Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi
Ana Marasović
Tim Miller
Yoav Goldberg
15 Oct 2020
Measuring and Reducing Gendered Correlations in Pre-trained Models
Kellie Webster
Xuezhi Wang
Ian Tenney
Alex Beutel
Emily Pitler
Ellie Pavlick
Jilin Chen
Ed Chi
Slav Petrov
FaML
12 Oct 2020
LOGAN: Local Group Bias Detection by Clustering
Jieyu Zhao
Kai-Wei Chang
06 Oct 2020
Astraea: Grammar-based Fairness Testing
E. Soremekun
Sakshi Udeshi
Sudipta Chattopadhyay
06 Oct 2020
Explaining The Efficacy of Counterfactually Augmented Data
Divyansh Kaushik
Amrith Rajagopal Setlur
Eduard H. Hovy
Zachary Chase Lipton
CML
05 Oct 2020
Fairness in Machine Learning: A Survey
Simon Caton
C. Haas
FaML
04 Oct 2020