StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
20 April 2020 · arXiv:2004.09456
Papers citing "StereoSet: Measuring stereotypical bias in pretrained language models" (9 of 159 shown)
Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko, Danushka Bollegala
23 Jan 2021
Underspecification Presents Challenges for Credibility in Modern Machine Learning
Alexander D'Amour, Katherine A. Heller, D. Moldovan, Ben Adlam, B. Alipanahi, ..., Kellie Webster, Steve Yadlowsky, T. Yun, Xiaohua Zhai, D. Sculley
06 Nov 2020
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
30 Sep 2020
Utility is in the Eye of the User: A Critique of NLP Leaderboards
Kawin Ethayarajh, Dan Jurafsky
29 Sep 2020
Critical Thinking for Language Models
Gregor Betz, Christian Voigt, Kyle Richardson
15 Sep 2020
On Robustness and Bias Analysis of BERT-based Relation Extraction
Luoqiu Li, Xiang Chen, Hongbin Ye, Zhen Bi, Shumin Deng, Ningyu Zhang, Huajun Chen
14 Sep 2020
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020
DQI: Measuring Data Quality in NLP
Swaroop Mishra, Anjana Arunkumar, Bhavdeep Singh Sachdeva, Chris Bryan, Chitta Baral
02 May 2020
The Woman Worked as a Babysitter: On Biases in Language Generation
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
03 Sep 2019