
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
arXiv:2010.00133, 30 September 2020
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman

Papers citing "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models"

50 / 139 papers shown
No Offense Taken: Eliciting Offensiveness from Language Models
Anugya Srivastava, Rahul Ahuja, Rohith Mukku (02 Oct 2023)

Foundation Metrics for Evaluating Effectiveness of Healthcare Conversations Powered by Generative AI
Mahyar Abbasian, Elahe Khatibi, Iman Azimi, David Oniani, Zahra Shakeri Hossein Abad, ..., Bryant Lin, Olivier Gevaert, Li-Jia Li, Ramesh C. Jain, Amir M. Rahmani (21 Sep 2023)

BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model
Nolan Dey, Daria Soboleva, Faisal Al-Khateeb, Bowen Yang, Ribhu Pathria, ..., Robert Myers, Jacob Robert Steeves, Natalia Vassilieva, Marvin Tom, Joel Hestness (20 Sep 2023)

OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs
Patrick Haller, Ansar Aynetdinov, Alan Akbik (07 Sep 2023)

Thesis Distillation: Investigating The Impact of Bias in NLP Models on Hate Speech Detection
Fatma Elsafoury (31 Aug 2023)

Cultural Alignment in Large Language Models: An Explanatory Analysis Based on Hofstede's Cultural Dimensions
Reem I. Masoud, Ziquan Liu, Martin Ferianc, Philip C. Treleaven, Miguel R. D. Rodrigues (25 Aug 2023)

A Survey on Fairness in Large Language Models
Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang (20 Aug 2023)

Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases
Yingji Li, Mengnan Du, Xin Wang, Ying Wang (04 Jul 2023)

An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models
Zhongbin Xie, Thomas Lukasiewicz (06 Jun 2023)

Uncovering and Quantifying Social Biases in Code Generation
Yong-Jin Liu, Xiaokang Chen, Yan Gao, Zhe Su, Fengji Zhang, Daoguang Zan, Jian-Guang Lou, Pin-Yu Chen, Tsung-Yi Ho (24 May 2023)

An Efficient Multilingual Language Model Compression through Vocabulary Trimming
Asahi Ushio, Yi Zhou, Jose Camacho-Collados (24 May 2023)

Having Beer after Prayer? Measuring Cultural Bias in Large Language Models
Tarek Naous, Michael Joseph Ryan, Alan Ritter, Wei-ping Xu (23 May 2023)

Language-Agnostic Bias Detection in Language Models with Bias Probing
Abdullatif Köksal, Omer F. Yalcin, Ahmet Akbiyik, M. Kilavuz, Anna Korhonen, Hinrich Schütze (22 May 2023)

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett (22 May 2023)

Comparing Biases and the Impact of Multilingual Training across Multiple Languages
Sharon Levy, Neha Ann John, Ling Liu, Yogarshi Vyas, Jie Ma, Yoshinari Fujinuma, Miguel Ballesteros, Vittorio Castelli, Dan Roth (18 May 2023)

A Better Way to Do Masked Language Model Scoring
Carina Kauf, Anna A. Ivanova (17 May 2023)

On the Origins of Bias in NLP through the Lens of the Jim Code
Fatma Elsafoury, Gavin Abercrombie (16 May 2023)

Constructing Holistic Measures for Social Biases in Masked Language Models
Yang Liu, Yuexian Hou (12 May 2023)

Evaluation of Social Biases in Recent Large Pre-Trained Models
Swapnil Sharma, Nikita Anand, V. KranthiKiranG., Alind Jain (13 Apr 2023)

Assessing Language Model Deployment with Risk Cards
Leon Derczynski, Hannah Rose Kirk, Vidhisha Balachandran, Sachin Kumar, Yulia Tsvetkov, M. Leiser, Saif Mohammad (31 Mar 2023)

Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning
Hongyin Luo, James R. Glass (10 Mar 2023)

LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, ..., Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample (27 Feb 2023)

LaSER: Language-Specific Event Recommendation
Sara Abdollahi, Simon Gottschalk, Elena Demidova (24 Feb 2023)

In-Depth Look at Word Filling Societal Bias Measures
Matúš Pikuliak, Ivana Benová, Viktor Bachratý (24 Feb 2023)

Auditing large language models: a three-layered approach
Jakob Mokander, Jonas Schuett, Hannah Rose Kirk, Luciano Floridi (16 Feb 2023)

Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?
Chengwei Qin, Q. Li, Ruochen Zhao, Chenyu You (16 Feb 2023)

The Capacity for Moral Self-Correction in Large Language Models
Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas I. Liao, Kamilė Lukošiūtė, ..., Tom B. Brown, C. Olah, Jack Clark, Sam Bowman, Jared Kaplan (15 Feb 2023)

Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns
Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu (11 Feb 2023)

The Gradient of Generative AI Release: Methods and Considerations
Irene Solaiman (05 Feb 2023)

FineDeb: A Debiasing Framework for Language Models
Akash Saravanan, Dhruv Mullick, Habibur Rahman, Nidhi Hegde (05 Feb 2023)

Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets
Tosin P. Adewumi, Isabella Sodergren, Lama Alkhaled, Sana Sabah Sabry, F. Liwicki, Marcus Liwicki (28 Jan 2023)

BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models
Ali Borji (28 Jan 2023)

Trustworthy Social Bias Measurement
Rishi Bommasani, Percy Liang (20 Dec 2022)

Language model acceptability judgements are not always robust to context
Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, R. Levy, Adina Williams (18 Dec 2022)

Manifestations of Xenophobia in AI Systems
Nenad Tomašev, J. L. Maynard, Iason Gabriel (15 Dec 2022)

Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models
Silke Husse, Andreas Spitz (15 Nov 2022)

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness
C. Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, ..., Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer (10 Nov 2022)

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf (09 Nov 2022)

Detecting Unintended Social Bias in Toxic Language Datasets
Nihar Ranjan Sahoo, Himanshu Gupta, P. Bhattacharyya (21 Oct 2022)

Choose Your Lenses: Flaws in Gender Bias Evaluation
Hadas Orgad, Yonatan Belinkov (20 Oct 2022)

The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks
Nikil Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang (18 Oct 2022)

BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
Tianxiang Sun, Junliang He, Xipeng Qiu, Xuanjing Huang (14 Oct 2022)

SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models
Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger (13 Oct 2022)

Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki (06 Oct 2022)

GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang (05 Oct 2022)

Re-contextualizing Fairness in NLP: The Case of India
Shaily Bhatt, Sunipa Dev, Partha P. Talukdar, Shachi Dave, Vinodkumar Prabhakaran (25 Sep 2022)

Few-shot Adaptation Works with UnpredicTable Data
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez (01 Aug 2022)

Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models
Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May (23 Jun 2022)

Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias
Yarden Tal, Inbal Magar, Roy Schwartz (20 Jun 2022)

Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Maribeth Rauh, John F. J. Mellor, J. Uesato, Po-Sen Huang, Johannes Welbl, ..., Amelia Glaese, G. Irving, Iason Gabriel, William S. Isaac, Lisa Anne Hendricks (16 Jun 2022)