CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
arXiv:2010.00133 · 30 September 2020

Papers citing "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models"

39 of 139 papers shown
What Changed? Investigating Debiasing Methods using Causal Mediation Analysis
Su-Ha Jeoung, Jana Diesner
Communities: CML
21 · 7 · 0 · 01 Jun 2022

Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts
Qinyuan Ye, Juan Zha, Xiang Ren
Communities: MoE
18 · 12 · 0 · 25 May 2022

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams
79 · 130 · 0 · 18 May 2022

OPT: Open Pre-trained Transformer Language Models
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, ..., Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer
Communities: VLM, OSLM, AI4CE
76 · 3,522 · 0 · 02 May 2022

Detoxifying Language Models with a Toxic Corpus
Yoon A Park, Frank Rudzicz
27 · 6 · 0 · 30 Apr 2022

Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts
Apoorv Garg, Deval Srivastava, Zhiyang Xu, Lifu Huang
16 · 5 · 0 · 15 Apr 2022

Fair and Argumentative Language Modeling for Computational Argumentation
Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto
19 · 21 · 0 · 08 Apr 2022

Probing Pre-Trained Language Models for Cross-Cultural Differences in Values
Arnav Arora, Lucie-Aimée Kaffee, Isabelle Augenstein
Communities: VLM
43 · 124 · 0 · 25 Mar 2022

minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models
Kanishka Misra
19 · 58 · 0 · 24 Mar 2022

Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal
Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, Aram Galstyan
21 · 49 · 0 · 23 Mar 2022

Sense Embeddings are also Biased -- Evaluating Social Biases in Static and Contextualised Sense Embeddings
Yi Zhou, Masahiro Kaneko, Danushka Bollegala
29 · 23 · 0 · 14 Mar 2022

Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
Masashi Takeshita, Rafal Rzepka, K. Araki
31 · 6 · 0 · 10 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Communities: OSLM, ALM
369 · 12,003 · 0 · 04 Mar 2022

A Survey on Gender Bias in Natural Language Processing
Karolina Stańczak, Isabelle Augenstein
30 · 110 · 0 · 28 Dec 2021

Efficient Large Scale Language Modeling with Mixtures of Experts
Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, ..., Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Ves Stoyanov
Communities: MoE
61 · 188 · 0 · 20 Dec 2021

Few-shot Learning with Multilingual Language Models
Xi Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, ..., Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Ves Stoyanov, Xian Li
Communities: BDL, ELM, LRM
64 · 286 · 0 · 20 Dec 2021

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
19 · 24 · 0 · 14 Dec 2021

MetaICL: Learning to Learn In Context
Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi
Communities: LRM
70 · 470 · 0 · 29 Oct 2021

An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
Nicholas Meade, Elinor Poole-Dayan, Siva Reddy
22 · 123 · 0 · 16 Oct 2021

Fairness-aware Class Imbalanced Learning
Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn, Lea Frermann
Communities: FaML
109 · 28 · 0 · 21 Sep 2021

Evaluating Debiasing Techniques for Intersectional Biases
Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann
110 · 48 · 0 · 21 Sep 2021

Sustainable Modular Debiasing of Language Models
Anne Lauscher, Tobias Lüken, Goran Glavas
55 · 120 · 0 · 08 Sep 2021

On Measures of Biases and Harms in NLP
Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, ..., M. Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
31 · 80 · 0 · 07 Aug 2021

Q-Pain: A Question Answering Dataset to Measure Social Bias in Pain Management
Cécile Logé, Emily L. Ross, D. Dadey, Saahil Jain, A. Saporta, A. Ng, Pranav Rajpurkar
24 · 22 · 0 · 03 Aug 2021

Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Paula Czarnowska, Yogarshi Vyas, Kashif Shah
21 · 104 · 0 · 28 Jun 2021

A Survey of Race, Racism, and Anti-Racism in NLP
Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov
42 · 122 · 0 · 21 Jun 2021

A Discussion on Building Practical NLP Leaderboards: The Case of Machine Translation
Sebastin Santy, Prasanta Bhattacharya
Communities: LLMAG
33 · 2 · 0 · 11 Jun 2021

RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
Soumya Barikeri, Anne Lauscher, Ivan Vulić, Goran Glavas
45 · 178 · 0 · 07 Jun 2021

Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
Kathleen C. Fraser, I. Nejadgholi, S. Kiritchenko
19 · 37 · 0 · 04 Jun 2021

How Reliable are Model Diagnostics?
V. Aribandi, Yi Tay, Donald Metzler
19 · 19 · 0 · 12 May 2021

Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing
Boaz Shmueli, Jan Fell, Soumya Ray, Lun-Wei Ku
112 · 86 · 0 · 20 Apr 2021

CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Qinyuan Ye, Bill Yuchen Lin, Xiang Ren
223 · 180 · 0 · 18 Apr 2021

Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko, Danushka Bollegala
218 · 137 · 0 · 23 Jan 2021

When Do You Need Billions of Words of Pretraining Data?
Yian Zhang, Alex Warstadt, Haau-Sing Li, Samuel R. Bowman
29 · 136 · 0 · 10 Nov 2020

Towards Ethics by Design in Online Abusive Content Detection
S. Kiritchenko, I. Nejadgholi
21 · 13 · 0 · 28 Oct 2020

Multi-Dimensional Gender Bias Classification
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
Communities: FaML
22 · 122 · 0 · 01 May 2020

StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
37 · 957 · 0 · 20 Apr 2020

Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
32 · 205 · 0 · 10 Nov 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Communities: ELM
299 · 6,984 · 0 · 20 Apr 2018