The Woman Worked as a Babysitter: On Biases in Language Generation

3 September 2019
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

Papers citing "The Woman Worked as a Babysitter: On Biases in Language Generation"

Showing 21 of 121 citing papers.

On Measures of Biases and Harms in NLP
Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, ..., M. Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
07 Aug 2021

Q-Pain: A Question Answering Dataset to Measure Social Bias in Pain Management
Cécile Logé, Emily L. Ross, D. Dadey, Saahil Jain, A. Saporta, A. Ng, Pranav Rajpurkar
03 Aug 2021

Improving Counterfactual Generation for Fair Hate Speech Detection
Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, M. Atari, Xiang Ren, Morteza Dehghani
03 Aug 2021

Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling
Emily Dinan, Gavin Abercrombie, A. S. Bergman, Shannon L. Spruit, Dirk Hovy, Y-Lan Boureau, Verena Rieser
07 Jul 2021

A Survey of Race, Racism, and Anti-Racism in NLP
Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov
21 Jun 2021

Learning Knowledge Graph-based World Models of Textual Environments
Prithviraj Ammanabrolu, Mark O. Riedl
17 Jun 2021

RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
Soumya Barikeri, Anne Lauscher, Ivan Vulić, Goran Glavas
07 Jun 2021

Rethinking Search: Making Domain Experts out of Dilettantes
Donald Metzler, Yi Tay, Dara Bahri, Marc Najork
05 May 2021

Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning
Xiangyu Peng, Siyan Li, Sarah Wiegreffe, Mark O. Riedl
04 May 2021

StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer
Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard H. Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency
12 Apr 2021

How Many Data Points is a Prompt Worth?
Teven Le Scao, Alexander M. Rush
15 Mar 2021

Pretrained Transformers as Universal Computation Engines
Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch
09 Mar 2021

Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
Hannah Rose Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, F. Dreyer, Aleksandar Shtedritski, Yuki M. Asano
08 Feb 2021

Dictionary-based Debiasing of Pre-trained Word Embeddings
Masahiro Kaneko, Danushka Bollegala
23 Jan 2021

PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi
26 Oct 2020

Ethical behavior in humans and machines -- Evaluating training data quality for beneficial machine learning
Thilo Hagendorff
26 Aug 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020

Do Neural Ranking Models Intensify Gender Bias?
Navid Rekabsaz, Markus Schedl
01 May 2020

StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
20 Apr 2020

Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
10 Nov 2019

Fair Generative Modeling via Weak Supervision
Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, Stefano Ermon
26 Oct 2019