Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias (arXiv:2206.09860)
20 June 2022
Yarden Tal, Inbal Magar, Roy Schwartz

Papers citing "Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias"

22 papers
Quantifying Memorization Across Neural Language Models
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang
15 Feb 2022

Evaluating Gender Bias in Natural Language Inference
Shanya Sharma, Manan Dey, Koustuv Sinha
12 May 2021

Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, Matt Gardner
18 Apr 2021

Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models
Daniel de Vassimon Manela, D. Errington, Thomas Fisher, B. V. Breugel, Pasquale Minervini
24 Jan 2021

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
30 Sep 2020
DeBERTa: Decoding-enhanced BERT with Disentangled Attention
Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen
05 Jun 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020

StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
20 Apr 2020

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
23 Oct 2019
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
26 Jul 2019

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
18 Jun 2019

Evaluating Gender Bias in Machine Translation
Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer
03 Jun 2019

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
Alex Jinpeng Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
02 May 2019

On Measuring Social Biases in Sentence Encoders
Chandler May, Alex Jinpeng Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger
25 Mar 2019
BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model
Alex Jinpeng Wang, Kyunghyun Cho
11 Feb 2019

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Adam Tauman Kalai
27 Jan 2019

Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems
S. Kiritchenko, Saif M. Mohammad
11 May 2018

Gender Bias in Coreference Resolution
Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
25 Apr 2018

Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang
18 Apr 2018
18 Apr 2018
Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
Noam M. Shazeer
Mitchell Stern
ODL
84
1,051
0
11 Apr 2018
A Broad-Coverage Challenge Corpus for Sentence Understanding through
  Inference
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
Adina Williams
Nikita Nangia
Samuel R. Bowman
524
4,492
0
18 Apr 2017
Semantics derived automatically from language corpora contain human-like
  biases
Semantics derived automatically from language corpora contain human-like biases
Aylin Caliskan
J. Bryson
Arvind Narayanan
215
2,673
0
25 Aug 2016